Research Article

KÇ3B-ESA: Hiperspektral Görüntü Sınıflandırması için Yeni 3B Evrişimli Sinir Ağı ve Uzaktan Algılama Uygulaması

NE3D-CNN: A New 3D Convolutional Neural Network for Hyperspectral Image Classification and Remote Sensing Application

Year 2020, Ejosat Special Issue 2020 (ICCEES), 65-71, 05.10.2020
https://doi.org/10.31590/ejosat.802890

Abstract

Hyperspectral Imaging (HSI) data consist of hundreds of bands containing both spatial and spectral information. When classifying HSI data, it is of great importance to capture spectral features as well as spatial features. In this study, a new deep learning model is proposed to obtain both spatial and spectral information. First, because HSI data are high-dimensional, Principal Component Analysis (PCA) was applied to all data to reduce the spectral dimension while leaving the spatial dimensions unchanged. Then, the Neighbourhood Extraction (NE) method, a new method used in HSI classification studies, was applied. With this method, the number of samples was increased by creating mini-cubes that scan all pixels. Finally, these cubes were used to train the 3D Convolutional Neural Network (3D-CNN) model, which contains 3D convolution layers. In this way, more meaningful features were obtained. To test the proposed model, HSI classification experiments were conducted on the Indian Pines (IP), Salinas Scene (SA) and Pavia University (PU) remote sensing datasets. Classification performance was evaluated by calculating the overall accuracy (OA), kappa coefficient (KC) and average accuracy (AA) for all datasets. The classification process yielded 99.10% OA, 98.97% KC and 96.23% AA for the IP dataset; 100% OA, 100% KC and 100% AA for the SA dataset; and 99.90% OA, 99.87% KC and 99.67% AA for the PU dataset. Finally, these results were compared with state-of-the-art deep learning-based methods, showing that the proposed NE3D-CNN model achieves considerably better performance.
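
The pipeline summarized in the abstract (PCA-based spectral reduction, neighbourhood extraction of mini-cubes around each labelled pixel, and training of a 3D-CNN on those cubes) can be illustrated with the short Python sketch below. It is a minimal illustration using NumPy, scikit-learn and TensorFlow/Keras; the window size, the number of principal components and the layer configuration are illustrative assumptions, not the exact NE3D-CNN architecture or hyperparameters reported in the paper.

    # Minimal sketch of the described pipeline. The 11x11 window, the 30
    # principal components and the layer sizes below are illustrative
    # assumptions, not the paper's exact configuration.
    import numpy as np
    from sklearn.decomposition import PCA
    from tensorflow.keras import layers, models

    def reduce_spectral_dim(cube, n_components=30):
        """PCA along the spectral axis; the spatial dimensions stay unchanged."""
        h, w, b = cube.shape
        flat = cube.reshape(-1, b)                        # (H*W, B)
        reduced = PCA(n_components=n_components).fit_transform(flat)
        return reduced.reshape(h, w, n_components).astype("float32")

    def extract_neighbourhood_cubes(cube, labels, window=11):
        """Neighbourhood extraction: one mini-cube per labelled pixel."""
        pad = window // 2
        padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
        samples, targets = [], []
        for r in range(cube.shape[0]):
            for c in range(cube.shape[1]):
                if labels[r, c] == 0:                     # 0 = unlabelled background
                    continue
                samples.append(padded[r:r + window, c:c + window, :])
                targets.append(labels[r, c] - 1)          # classes become 0-based
        x = np.asarray(samples)[..., np.newaxis]          # (N, window, window, bands, 1)
        return x, np.asarray(targets)

    def build_3d_cnn(input_shape, n_classes):
        """A small 3D-CNN whose convolutions cover the spatial and spectral axes."""
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv3D(8, (3, 3, 7), activation="relu"),
            layers.Conv3D(16, (3, 3, 5), activation="relu"),
            layers.Conv3D(32, (3, 3, 3), activation="relu"),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.4),
            layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    if __name__ == "__main__":
        # Random stand-in data; replace with a real scene such as Indian Pines.
        hsi = np.random.rand(64, 64, 100).astype("float32")
        gt = np.random.randint(0, 17, size=(64, 64))
        reduced = reduce_spectral_dim(hsi, n_components=30)
        x, y = extract_neighbourhood_cubes(reduced, gt, window=11)
        model = build_3d_cnn(x.shape[1:], n_classes=16)
        model.fit(x, y, batch_size=64, epochs=1, validation_split=0.1)

In this sketch the 3D convolutions slide over both the spatial window and the spectral axis of each mini-cube, which is what allows the network to learn joint spatial-spectral features, as described in the abstract.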

References

  • Camps-Valls, G., & Bruzzone, L. (2005). Kernel-based methods for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 43(6), 1351-1362.
  • Chu, W., & Cai, D. (2018). Deep feature based contextual model for object detection. Neurocomputing, 275, 1035-1042.
  • Cihan, M. (2020). Hiperspektral Görüntüleme Yöntemi Kullanılarak Yenidoğan Sağlık Durumlarının Derin Öğrenme Metotları ile Sınıflandırılması. Yayımlanmış Yüksek Lisans Tezi, Konya Teknik Üniversitesi, Konya.
  • Gidaris, S., & Komodakis, N. (2015). Object detection via a multi-region and semantic segmentation-aware CNN model. In Proceedings of the IEEE international conference on computer vision, 1134-1142.
  • Ham, J., Chen, Y., Crawford, M. M., & Ghosh, J. (2005). Investigation of the random forest framework for classification of hyperspectral data. IEEE Transactions on Geoscience and Remote Sensing, 43(3), 492-501.
  • Hamida, A. B., Benoit, A., Lambert, P., & Amar, C. B. (2018). 3-D deep learning approach for remote sensing image classification. IEEE Transactions on Geoscience and Remote Sensing, 56(8), 4420-4434.
  • He, M., Li, B., & Chen, H. (2017). Multi-scale 3D deep convolutional neural network for hyperspectral image classification. In 2017 IEEE International Conference on Image Processing (ICIP), 3904-3908.
  • Huang, K., Li, S., Kang, X., & Fang, L. (2016). Spectral–spatial hyperspectral image classification based on KNN. Sensing and Imaging, 17(1), 1.
  • Kingma, D. P., & Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
  • Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems, 1097-1105.
  • Lawrence, R. L., Wood, S. D., & Sheley, R. L. (2006). Mapping invasive plants using hyperspectral imagery and Breiman Cutler classifications (RandomForest). Remote Sensing of Environment, 100(3), 356-362.
  • Lee, H., & Kwon, H. (2016). Contextual deep CNN based hyperspectral classification. In 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 3322-3325.
  • Li, Y., Xie, W., & Li, H. (2017). Hyperspectral image reconstruction by deep convolutional neural network for classification. Pattern Recognition, 63, 371-383.
  • Liu, F., Shen, C., & Lin, G. (2015). Deep convolutional neural fields for depth estimation from a single image. In Proceedings of the IEEE conference on computer vision and pattern recognition, 5162-5170.
  • Liu, W., Wen, Y., Yu, Z., & Yang, M. (2016, June). Large-margin softmax loss for convolutional neural networks. In ICML, 2(3), 7.
  • Ma, L., Crawford, M. M., & Tian, J. (2010). Local manifold learning-based k-nearest-neighbor for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 48(11), 4099-4109.
  • Makantasis, K., Karantzalos, K., Doulamis, A., & Doulamis, N. (2015). Deep supervised learning for hyperspectral data classification through convolutional neural networks. In 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 4959-4962.
  • Melgani, F., & Bruzzone, L. (2004). Classification of hyperspectral remote sensing images with support vector machines. IEEE Transactions on Geoscience and Remote Sensing, 42(8), 1778-1790.
  • Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in neural information processing systems, 91-99.
  • Roy, S. K., Krishna, G., Dubey, S. R., & Chaudhuri, B. B. (2019). HybridSN: Exploring 3-D–2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geoscience and Remote Sensing Letters, 17(2), 277-281.
  • Saba, T., Khan, M. A., Rehman, A., & Marie-Sainte, S. L. (2019). Region extraction and classification of skin cancer: A heterogeneous framework of deep CNN features fusion and reduction. Journal of medical systems, 43(9), 289.
  • Wang, J., Yang, Y., Mao, J., Huang, Z., Huang, C., & Xu, W. (2016). CNN-RNN: A unified framework for multi-label image classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2285-2294.
  • Zhao, W., & Du, S. (2016). Spectral–spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Transactions on Geoscience and Remote Sensing, 54(8), 4544-4554.
  • Zhong, Z., Li, J., Luo, Z., & Chapman, M. (2017). Spectral–spatial residual network for hyperspectral image classification: A 3-D deep learning framework. IEEE Transactions on Geoscience and Remote Sensing, 56(2), 847-858.

Details

Primary Language: Turkish
Subjects: Engineering
Section: Articles
Authors

Mücahit Cihan 0000-0002-1426-319X

Murat Ceylan 0000-0001-6503-9668

Publication Date: 5 October 2020
Published in Issue: Year 2020, Ejosat Special Issue 2020 (ICCEES)

How to Cite

APA: Cihan, M., & Ceylan, M. (2020). KÇ3B-ESA: Hiperspektral Görüntü Sınıflandırması için Yeni 3B Evrişimli Sinir Ağı ve Uzaktan Algılama Uygulaması. Avrupa Bilim ve Teknoloji Dergisi, 65-71. https://doi.org/10.31590/ejosat.802890