Research Article

Feature extraction from satellite images using SegNet and fully convolutional networks (FCN)

Year 2020, Volume 5, Issue 3, 138-143, 01.10.2020
https://doi.org/10.26833/ijeg.645426

Abstract

Object detection and classification are among the most popular topics in photogrammetry and remote sensing. With technological developments, large numbers of high-resolution satellite images have become available, making it possible to distinguish many different objects. Despite these advances, the need for human intervention in object detection and classification remains a major problem. Machine learning has long been the preferred option for reducing this need; although it has achieved considerable success, human intervention, typically in the form of hand-crafted feature design, is still required. Deep learning largely removes this requirement because deep learning methods learn directly from raw data, unlike traditional machine learning methods. Although deep learning has a long history, its recent rise in popularity is mainly due to the availability of sufficient data for training and of hardware capable of processing that data. In this study, the performance of two convolutional neural network architectures used for object segmentation and classification in images, SegNet and Fully Convolutional Networks (FCN), was compared. The two models were trained on the same training dataset and evaluated on the same test dataset. The results show that, for building segmentation, there is no significant difference in accuracy between the two architectures, although FCN outperforms SegNet by about 1%. This result may vary depending on the dataset used to train the system.
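The comparison described in the abstract can be illustrated with a short sketch: two encoder-decoder segmentation networks, one SegNet-like and one FCN-like, trained on the same building-mask dataset and scored with the same metric, so that only the architecture differs between the two runs. The framework (TensorFlow/Keras), layer widths, input size, and the `train_ds`/`test_ds` pipelines below are illustrative assumptions, not details taken from the paper; note also that the real SegNet transfers max-pooling indices to its decoder, which is simplified here to plain upsampling.

```python
# Illustrative sketch only: SegNet-like vs. FCN-like binary building segmentation,
# trained and evaluated with an identical protocol. Hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def segnet_like(input_shape=(256, 256, 3)):
    """SegNet-style symmetric encoder-decoder (upsampling instead of max-unpooling)."""
    inp = tf.keras.Input(shape=input_shape)
    x = conv_block(inp, 64); x = layers.MaxPooling2D()(x)
    x = conv_block(x, 128);  x = layers.MaxPooling2D()(x)
    x = conv_block(x, 256);  x = layers.MaxPooling2D()(x)
    x = layers.UpSampling2D()(x); x = conv_block(x, 256)
    x = layers.UpSampling2D()(x); x = conv_block(x, 128)
    x = layers.UpSampling2D()(x); x = conv_block(x, 64)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)   # building vs. background
    return Model(inp, out, name="segnet_like")

def fcn_like(input_shape=(256, 256, 3)):
    """FCN-style model: convolutional encoder plus a learned upsampling head."""
    inp = tf.keras.Input(shape=input_shape)
    x = conv_block(inp, 64); x = layers.MaxPooling2D()(x)
    x = conv_block(x, 128);  x = layers.MaxPooling2D()(x)
    x = conv_block(x, 256);  x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(1, 1)(x)                            # per-pixel class scores
    x = layers.Conv2DTranspose(1, 16, strides=8, padding="same")(x)  # 8x upsampling
    out = layers.Activation("sigmoid")(x)
    return Model(inp, out, name="fcn_like")

def train_and_score(model, train_ds, test_ds, epochs=50):
    """Same training set, same test set, same metric for both architectures.
    train_ds / test_ds are assumed tf.data pipelines of (image, binary mask) pairs."""
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.BinaryAccuracy(name="pixel_acc")])
    model.fit(train_ds, epochs=epochs, verbose=2)
    return model.evaluate(test_ds, return_dict=True)["pixel_acc"]
```

Calling `train_and_score` on both models with identical datasets mirrors the protocol the abstract describes: the reported ~1% accuracy difference then reflects the architectures alone, not the data or training procedure.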

References

  • Akbulut, Z., Ozdemir, S., Acar, H., Dihkan, M., & Karsli, F. (2018). Automatic extraction of building boundaries from high resolution images with active contour segmentation. International Journal of Engineering and Geosciences, 3(1), 37-42.
  • Badrinarayanan, V., Kendall, A., & Cipolla, R. (2017). SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(12), 2481-2495.
  • Bozkurt, S. (2018). Derin Ogrenme Algoritmalari Kullanilarak Cay Alanlarının Otomatik Segmentasyonu [Automatic segmentation of tea fields using deep learning algorithms] (Master's thesis). YTU, Istanbul.
  • Chen, Q., Wang, L., Wu, Y., Wu, G., Guo, Z., & Waslander, S. L. (2019). Aerial imagery for roof segmentation: A large-scale dataset towards automatic mapping of buildings. ISPRS Journal of Photogrammetry and Remote Sensing, 147, 42-55.
  • Comert, R., Kucuk, D., & Avdan, U. (2019). Object Based Burned Area Mapping with Random Forest Algorithm. International Journal of Engineering and Geosciences, 4(2), 78-87.
  • Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., & Schiele, B. (2016). The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3213-3223).
  • Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition (pp. 248-255). IEEE.
  • De Souza, W. (2017). Semantic Segmentation using Fully Convolutional Neural Networks. Retrieved 19.03.2020, from https://medium.com/@wilburdes/semanticsegmentation-using-fully-convolutional-neuralnetworks-86e45336f99b
  • Du, Z., Yang, J., Huang, W., & Ou, C. (2018). Training SegNet for cropland classification of high resolution remote sensing images. In AGILE Conference.
  • Everingham, M., Van Gool, L., Williams, C. K., Winn, J., & Zisserman, A. (2010). The PASCAL Visual Object Classes (VOC) challenge. International Journal of Computer Vision, 88(2), 303-338.
  • Guo, Z., Shao, X., Xu, Y., Miyazaki, H., Ohira, W., & Shibasaki, R. (2016). Identification of village building via Google Earth images and supervised machine learning methods. Remote Sensing, 8(4), 271.
  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
  • Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).
  • LeCun, Y., Boser, B., Denker, J. S., Henderson, D., Howard, R. E., Hubbard, W., & Jackel, L. D. (1989). Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4), 541-551.
  • Lin, T. Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P., & Zitnick, C. L. (2014). Microsoft COCO: Common objects in context. In European conference on computer vision (pp. 740-755). Springer, Cham.
  • Long, J., Shelhamer, E., & Darrell, T. (2015). Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3431-3440).
  • Ma, L., Li, M., Ma, X., Cheng, L., Du, P., & Liu, Y. (2017). A review of supervised object-based land-cover image classification. ISPRS Journal of Photogrammetry and Remote Sensing, 130, 277-293.
  • Maggiori, E., Tarabalka, Y., Charpiat, G., & Alliez, P. (2017). Can semantic labeling methods generalize to any city? The Inria aerial image labeling benchmark. In 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS) (pp. 3226-3229). IEEE.
  • Sevgen, S. C. (2019). Airborne lidar data classification in complex urban area using random forest: a case study of Bergama, Turkey. International Journal of Engineering and Geosciences, 4(1), 45-51.
  • Tasdemir, S., & Ozkan, I. A. (2019). ANN approach for estimation of cow weight depending on photogrammetric body dimensions. International Journal of Engineering and Geosciences, 4(1), 36-44.
  • URL-1, 2012, http://www.imagenet.org/challenges/LSVRC/2012/results.html, [26.03.2020]
  • URL-2, 2017, https://meetshah1995.github.io/semanticsegmentation/deeplearning/pytorch/visdom/2017/06/01/semanticsegmentation-over-the-years.html, [19.03.2020].
  • URL-3, 2020, https://towardsdatascience.com/implementing-a-fullyconvolutional-network-fcn-in-tensorflow-2-3c46fb61de3b, [19.03.2020].
  • URL-4, http://www.deeplearning.net/tutorial/fcn_2D_segm.html, [19.03.2020]
  • Vakalopoulou, M., Karantzalos, K., Komodakis, N., & Paragios, N. (2015). Building detection in very high resolution multispectral data with deep learning features. In 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS) (pp. 1873-1876). IEEE.
  • Wu, G., Guo, Z., Shi, X., Chen, Q., Xu, Y., Shibasaki, R., & Shao, X. (2018a). A boundary regulated network for accurate roof segmentation and outline extraction. Remote Sensing, 10(8), 1195.
  • Wu, G., Shao, X., Guo, Z., Chen, Q., Yuan, W., Shi, X., Xu Y. & Shibasaki, R. (2018b). Automatic building segmentation of aerial imagery using multi-constraint fully convolutional networks. Remote Sensing, 10(3), 407.
There are 27 references in total.

Details

Primary Language: English
Section: Articles
Authors

Batuhan Sariturk 0000-0001-8777-4436

Bulent Bayram 0000-0002-4248-116X

Zaide Duran 0000-0002-1608-0119

Dursun Zafer Seker 0000-0001-7498-1540

Publication Date: October 1, 2020
Published in Issue: Year 2020, Volume 5, Issue 3

Cite

APA Sariturk, B., Bayram, B., Duran, Z., & Seker, D. Z. (2020). Feature extraction from satellite images using SegNet and fully convolutional networks (FCN). International Journal of Engineering and Geosciences, 5(3), 138-143. https://doi.org/10.26833/ijeg.645426
AMA Sariturk B, Bayram B, Duran Z, Seker DZ. Feature extraction from satellite images using SegNet and fully convolutional networks (FCN). IJEG. October 2020;5(3):138-143. doi:10.26833/ijeg.645426
Chicago Sariturk, Batuhan, Bulent Bayram, Zaide Duran, and Dursun Zafer Seker. “Feature Extraction from Satellite Images Using SegNet and Fully Convolutional Networks (FCN)”. International Journal of Engineering and Geosciences 5, no. 3 (October 2020): 138-43. https://doi.org/10.26833/ijeg.645426.
EndNote Sariturk B, Bayram B, Duran Z, Seker DZ (01 October 2020) Feature extraction from satellite images using SegNet and fully convolutional networks (FCN). International Journal of Engineering and Geosciences 5 3 138–143.
IEEE B. Sariturk, B. Bayram, Z. Duran, and D. Z. Seker, “Feature extraction from satellite images using SegNet and fully convolutional networks (FCN)”, IJEG, vol. 5, no. 3, pp. 138–143, 2020, doi: 10.26833/ijeg.645426.
ISNAD Sariturk, Batuhan et al. “Feature Extraction from Satellite Images Using SegNet and Fully Convolutional Networks (FCN)”. International Journal of Engineering and Geosciences 5/3 (October 2020), 138-143. https://doi.org/10.26833/ijeg.645426.
JAMA Sariturk B, Bayram B, Duran Z, Seker DZ. Feature extraction from satellite images using SegNet and fully convolutional networks (FCN). IJEG. 2020;5:138–143.
MLA Sariturk, Batuhan, et al. “Feature Extraction from Satellite Images Using SegNet and Fully Convolutional Networks (FCN)”. International Journal of Engineering and Geosciences, vol. 5, no. 3, 2020, pp. 138-43, doi:10.26833/ijeg.645426.
Vancouver Sariturk B, Bayram B, Duran Z, Seker DZ. Feature extraction from satellite images using SegNet and fully convolutional networks (FCN). IJEG. 2020;5(3):138-43.
