Research Article

Classification of Scenes in Aerial Images with Deep Learning Models

Year 2023, Volume 12, Issue 1, 37-43, 27.03.2023
https://doi.org/10.46810/tdfd.1225756

Abstract

Automatic classification of aerial images has become one of the intensively studied topics of recent years. In particular, the use of drones in fields such as agriculture, smart cities, surveillance, and security requires that the images captured by the on-board camera during autonomous mission execution be classified automatically. For this purpose, researchers have created new datasets, and various computer vision methods have been developed to achieve high accuracy. However, in addition to increasing the accuracy of these methods, their computational complexity must also be reduced, because methods running on devices such as drones, where energy consumption is critical, need to have low computational complexity. In this study, five different state-of-the-art deep learning models were first used to obtain high accuracy in the classification of aerial images. Among these models, VGG19 achieved the highest accuracy with 94.21%. In the second part of the study, the parameters of this model were analyzed and the model was reconstructed, reducing the parameter count of VGG19 from 143.6 million to 34 million. On the same test data, the reduced model achieved an accuracy of 93.56%. Thus, despite the 66.5% reduction in the parameter ratio, accuracy decreased by only 0.7%. Compared with previous studies, the results show improved performance.
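
To make the reconstruction step concrete, the sketch below shows one way such a parameter reduction can be set up, using PyTorch/torchvision rather than the authors' own toolchain: a pretrained VGG19 is loaded, its 1000-class ImageNet head is adapted to the scene classes, and the two 4096-unit fully connected layers, which hold most of the network's roughly 143.6 million parameters, are replaced with a much narrower head. The class count, layer widths, and resulting parameter counts are illustrative assumptions; the abstract does not specify the exact reconstructed architecture.

```python
# Illustrative sketch (not the paper's exact model): shrink VGG19's fully
# connected head to cut the parameter count while keeping the pretrained
# convolutional features for aerial-scene classification.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 30  # assumption: e.g. a 30-class aerial-scene benchmark such as AID

def count_params_m(model: nn.Module) -> float:
    """Number of trainable parameters, in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

# Original pretrained VGG19 (1000-class ImageNet head): ~143.7M parameters.
baseline = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
print(f"VGG19 as pretrained: {count_params_m(baseline):.1f}M parameters")

# Baseline transfer-learning setup: only the final layer is swapped for the scene classes.
baseline.classifier[6] = nn.Linear(4096, NUM_CLASSES)

# Reduced model: keep the convolutional features, but replace the two heavy
# 4096-wide fully connected layers with one narrow hidden layer (illustrative width).
reduced = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
reduced.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 512),   # 25088 -> 512 instead of 25088 -> 4096
    nn.ReLU(inplace=True),
    nn.Dropout(0.5),
    nn.Linear(512, NUM_CLASSES),
)
print(f"Reduced VGG19: {count_params_m(reduced):.1f}M parameters")  # ~33M with these widths
```

Shrinking the classifier is the natural target for this kind of reduction: VGG19's convolutional layers account for only about 20 million parameters, while the fully connected head holds the rest, so narrowing it lowers the total far more than trimming the convolutions would.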


Details

Primary Language: English
Subjects: Engineering
Section: Articles
Authors

Özkan İnik (ORCID: 0000-0003-4728-8438)

Publication Date: 27 March 2023
Published in Issue: Year 2023, Volume 12, Issue 1

How to Cite

APA İnik, Ö. (2023). Classification of Scenes in Aerial Images with Deep Learning Models. Türk Doğa ve Fen Dergisi, 12(1), 37-43. https://doi.org/10.46810/tdfd.1225756
AMA İnik Ö. Classification of Scenes in Aerial Images with Deep Learning Models. TDFD. March 2023;12(1):37-43. doi:10.46810/tdfd.1225756
Chicago İnik, Özkan. “Classification of Scenes in Aerial Images with Deep Learning Models”. Türk Doğa ve Fen Dergisi 12, no. 1 (March 2023): 37-43. https://doi.org/10.46810/tdfd.1225756.
EndNote İnik Ö (01 March 2023) Classification of Scenes in Aerial Images with Deep Learning Models. Türk Doğa ve Fen Dergisi 12 1 37–43.
IEEE Ö. İnik, “Classification of Scenes in Aerial Images with Deep Learning Models”, TDFD, vol. 12, no. 1, pp. 37–43, 2023, doi: 10.46810/tdfd.1225756.
ISNAD İnik, Özkan. “Classification of Scenes in Aerial Images with Deep Learning Models”. Türk Doğa ve Fen Dergisi 12/1 (March 2023), 37-43. https://doi.org/10.46810/tdfd.1225756.
JAMA İnik Ö. Classification of Scenes in Aerial Images with Deep Learning Models. TDFD. 2023;12:37–43.
MLA İnik, Özkan. “Classification of Scenes in Aerial Images with Deep Learning Models”. Türk Doğa ve Fen Dergisi, vol. 12, no. 1, 2023, pp. 37-43, doi:10.46810/tdfd.1225756.
Vancouver İnik Ö. Classification of Scenes in Aerial Images with Deep Learning Models. TDFD. 2023;12(1):37-43.