Research Article

Depthwise Separable Convolution Based Residual Network Architecture for Hyperspectral Image Classification

Year 2022, Volume: 10 Issue: 2, 242 - 258, 30.06.2022
https://doi.org/10.29109/gujsc.1055942

Abstract

Hyperspectral remote sensing images (HRSI) are 3D image cubes containing hundreds of spectral bands, with two spatial dimensions and one spectral dimension. Classification is one of the most popular research topics in HRSI. In recent years, many deep learning methods have been proposed for HRSI classification. In particular, Convolutional Neural Networks (CNNs) are widely used for classifying HRSIs. CNNs have a strong feature-learning capability that can provide more distinctive features for higher-quality HRSI classification. In this study, a method that combines 3D/2D CNNs, a residual network architecture (ResNet), and depthwise separable convolutions (DSC) is proposed. In deeper CNNs, ResNet is used to achieve higher classification performance as the number of layers increases; it also overcomes problems such as degradation and vanishing gradients that can occur in deep networks. DSCs are used because they reduce computational cost, help prevent overfitting, and provide richer spatial feature extraction. Spatial-spectral features are extracted simultaneously from HRSIs with the 3D CNN. However, using only a 3D CNN increases computational complexity, while using only a 2D CNN extracts spatial features alone and cannot capture spectral features. Using 3D and 2D CNNs together addresses both problems. In addition, principal component analysis is used as a preprocessing step in the proposed method to extract the optimum spectral bands. Experiments were carried out on two popular HRSI benchmark datasets, Indian Pines and Salinas. Overall accuracies of 99.45% on Indian Pines and 99.95% on Salinas were obtained. The classification results show that the classification performance of the proposed method is better than that of existing methods.
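
To make the described pipeline concrete, the sketch below shows one way the building blocks named in the abstract (PCA band reduction, 3D convolutions for joint spectral-spatial features, and depthwise separable 2D convolutions inside a residual block) could be assembled. It is a minimal sketch assuming a TensorFlow/Keras implementation; the patch size, number of retained principal components, and layer widths are illustrative placeholders, not the authors' exact configuration.

```python
# Illustrative 3D/2D hybrid with PCA preprocessing, depthwise separable 2D
# convolutions and a residual connection. Layer counts and sizes are
# assumptions chosen for readability, not the paper's exact architecture.
import numpy as np
from sklearn.decomposition import PCA
from tensorflow.keras import layers, models

def pca_reduce(cube, n_components=30):
    """Reduce the spectral dimension of an (H, W, B) hyperspectral cube with PCA."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(h, w, n_components)

def build_model(patch=25, bands=30, n_classes=16):
    inp = layers.Input(shape=(patch, patch, bands, 1))

    # 3D convolutions: joint spectral-spatial feature extraction.
    x = layers.Conv3D(8, (3, 3, 7), padding="same", activation="relu")(inp)
    x = layers.Conv3D(16, (3, 3, 5), padding="same", activation="relu")(x)

    # Collapse the spectral axis into channels so 2D convolutions can follow.
    x = layers.Reshape((patch, patch, bands * 16))(x)

    # Depthwise separable 2D convolutions wrapped in a residual block.
    shortcut = layers.Conv2D(64, 1, padding="same")(x)
    y = layers.SeparableConv2D(64, 3, padding="same", activation="relu")(x)
    y = layers.SeparableConv2D(64, 3, padding="same")(y)
    x = layers.Activation("relu")(layers.Add()([shortcut, y]))

    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)

    model = models.Model(inp, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    build_model().summary()
```

With labeled patches extracted around each pixel of the PCA-reduced cube, such a model could be trained with model.fit; the 1x1 convolution on the shortcut only matches channel counts so that the residual addition is valid.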



Details

Primary Language Turkish
Subjects Engineering
Journal Section Tasarım ve Teknoloji
Authors

Hüseyin Fırat 0000-0002-1257-8518

Mehmet Emin Asker 0000-0003-4585-4168

Davut Hanbay 0000-0003-2271-7865

Publication Date June 30, 2022
Submission Date January 10, 2022
Published in Issue Year 2022 Volume: 10 Issue: 2

Cite

APA Fırat, H., Asker, M. E., & Hanbay, D. (2022). Hiperspektral Görüntü Sınıflandırması için Derinlemesine Ayrılabilir Evrişim Tabanlı Artık Ağ Mimarisi. Gazi Üniversitesi Fen Bilimleri Dergisi Part C: Tasarım Ve Teknoloji, 10(2), 242-258. https://doi.org/10.29109/gujsc.1055942


e-ISSN: 2147-9526