Research Article

Comparison of Fully Convolutional Networks (FCN) and U-Net for Road Segmentation from High Resolution Imageries

Year 2020, Volume: 7 Issue: 3, 272 - 279, 06.12.2020
https://doi.org/10.30897/ijegeo.737993

Abstract

Segmentation is one of the most widely used classification techniques, in which image regions are assigned semantic labels. In this context, both the segmentation of discrete objects such as cars, airplanes, ships, and buildings, which stand apart from the background, and of classes such as land use and vegetation, which are difficult to discriminate from the background, is considered. However, image segmentation studies frequently encounter difficulties such as shadow, occlusion, background clutter, lighting, and shading, which cause fundamental changes in the appearance of features. With the development of technology, obtaining high spatial resolution satellite imagery and aerial photographs that contain detailed texture information has become much easier. In parallel with these improvements, deep learning architectures have been widely used to solve computer vision tasks of increasing difficulty. Thus, regional characteristics as well as artificial and natural objects can be perceived and interpreted precisely. In this study, two subsets produced from a large open-source labeled image set were used for road segmentation. The labeled data set consists of 150 satellite images of 1500 x 1500 pixels at 1.2 m resolution, which is not suitable for direct training. To avoid this problem, selected images from the data set were divided into smaller patches of 256 x 256 pixels and 512 x 512 pixels to train the system, and comparisons between them were carried out. Two artificial neural network architectures used for object segmentation on high-resolution images, U-Net and Fully Convolutional Networks (FCN), were selected to train the system with these data sets. When test data of the same size as the training patches were analyzed, approximately 97% road extraction accuracy was obtained from the high-resolution imagery with the FCN model trained on 512 x 512 patches.
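
As a purely illustrative aside, the patch-based data preparation described in the abstract (tiling 1500 x 1500 images into 256 x 256 or 512 x 512 training patches) could look like the minimal Python sketch below; the non-overlapping tiling and the discarding of incomplete border tiles are assumptions made for illustration, not the authors' published procedure.

    # Minimal sketch of tiling a large image into square training patches.
    # The non-overlapping layout and border handling are assumptions, not
    # necessarily the procedure used in the paper.
    import numpy as np

    def extract_patches(image, patch_size):
        """Split an (H, W, C) array into non-overlapping patch_size tiles.

        Border regions that do not fill a complete tile are discarded.
        """
        h, w = image.shape[:2]
        patches = []
        for top in range(0, h - patch_size + 1, patch_size):
            for left in range(0, w - patch_size + 1, patch_size):
                patches.append(image[top:top + patch_size,
                                     left:left + patch_size])
        return patches

    # A dummy 1500 x 1500 RGB image yields 25 patches at 256 px (5 x 5 grid)
    # and 4 patches at 512 px (2 x 2 grid).
    dummy = np.zeros((1500, 1500, 3), dtype=np.uint8)
    print(len(extract_patches(dummy, 256)), len(extract_patches(dummy, 512)))

Each image patch, together with its corresponding road mask, would then be fed to the U-Net or FCN model during training.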

References

  • Badrinarayanan, V., Kendall, A., Cipolla, R. (2017). SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12), 2481-2495.
  • Barghout, L., Lee, L. (2004). U.S. Patent Application No. 10/618,543.
  • Chen, L. C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A. L. (2014). Semantic image segmentation with deep convolutional nets and fully connected CRFs. arXiv preprint arXiv:1412.7062.
  • Cheng, D., Liao, R., Fidler, S., Urtasun, R. (2019). DARNet: Deep active ray network for building segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 7431-7439).
  • Cheng, G., Han, J. (2016). A survey on object detection in optical remote sensing images. ISPRS Journal of Photogrammetry and Remote Sensing, 117, 11-28.
  • Chollet, F. (2018). Deep Learning with Python. Manning.
  • Digitalogy (2020). The Difference Between Artificial Intelligence, Machine Learning, And Deep Learning. Retrieved 15 May 2020 from https://blog.digitalogy.co/the-difference-between-artificial-intelligence-machine-learning-and-deep-learning/
  • Guo, Y., Liu, Y., Oerlemans, A., Lao, S., Wu, S., Lew, M. S. (2016). Deep learning for visual understanding: A review. Neurocomputing, 187, 27-48.
  • Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., ..., Adam, H. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
  • Liu, W., Rabinovich, A., Berg, A. C. (2015). ParseNet: Looking wider to see better. arXiv preprint arXiv:1506.04579.
  • Long, J., Shelhamer, E., Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3431-3440).
  • Minaee, S., Boykov, Y., Porikli, F., Plaza, A., Kehtarnavaz, N., Terzopoulos, D. (2020). Image segmentation using deep learning: A survey. arXiv preprint arXiv:2001.05566.
  • Mnih, V. (2013). Machine learning for aerial image labelling (PhD thesis). University of Toronto, Toronto, Canada.
  • Nweke, H. F., Teh, Y. W., Al-Garadi, M. A., Alo, U. R. (2018). Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges. Expert Systems with Applications, 105, 233-261.
  • Ronneberger, O., Fischer, P., Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 234-241). Springer, Cham.
  • Wang, G., Li, W., Ourselin, S., Vercauteren, T. (2017, September). Automatic brain tumor segmentation using cascaded anisotropic convolutional neural networks. In International MICCAI Brainlesion Workshop (pp. 178-190). Springer, Cham.
There are 16 references in total.

Details

Primary Language English
Subjects Engineering
Section Research Articles
Authors

Ozan Ozturk 0000-0002-5979-6360

Batuhan Sarıtürk 0000-0001-8777-4436

Dursun Zafer Seker 0000-0001-7498-1540

Publication Date December 6, 2020
Published in Issue Year 2020 Volume: 7 Issue: 3

Cite

APA Ozturk, O., Sarıtürk, B., & Seker, D. Z. (2020). Comparison of Fully Convolutional Networks (FCN) and U-Net for Road Segmentation from High Resolution Imageries. International Journal of Environment and Geoinformatics, 7(3), 272-279. https://doi.org/10.30897/ijegeo.737993
