Research Article

Classification of Distortions in Agricultural Images Using Convolutional Neural Network

Year 2023, Volume: 9, Issue: 2, 174 - 182, 31.08.2023

Abstract

Monitoring products is important for quality and ripening control in an efficient agricultural production process. With current technology, monitoring is mostly performed on captured images and videos, and the quality of these images and videos directly affects the evaluation. If an image or video is distorted, the distortion must first be detected and classified before it can be eliminated. In this study, a method is presented for classifying distortions in agricultural images. Eleven different distortions are synthetically added to agricultural images, and a convolutional neural network (CNN) is designed to classify the distorted images. The designed CNN model is tested on four different datasets obtained from various agricultural fields and is also compared with previously proposed CNN architectures. The results show that the designed CNN model classifies distortions successfully.
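The abstract only summarizes the pipeline (synthetic distortions are added, then a CNN predicts the distortion type); the exact distortion generators and network layout are given in the full paper. The sketch below is a minimal illustration of that general idea, not the author's implementation: it assumes NumPy/SciPy for three example distortions (the paper uses eleven) and a small generic Keras CNN, with random arrays standing in for real crop images.

```python
# Minimal sketch (not the paper's actual pipeline): synthetically distort
# images and train a small CNN to classify the distortion type.
import numpy as np
from scipy.ndimage import gaussian_filter
from tensorflow.keras import layers, models

def add_gaussian_noise(img, sigma=0.05):
    """Additive Gaussian noise; img is float in [0, 1]."""
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_gaussian_blur(img, sigma=2.0):
    """Gaussian blur applied to the spatial axes only."""
    return gaussian_filter(img, sigma=(sigma, sigma, 0))

def add_salt_pepper(img, amount=0.02):
    """Salt-and-pepper noise: random pixels forced to 0 or 1."""
    out = img.copy()
    mask = np.random.rand(*img.shape[:2])
    out[mask < amount / 2] = 0.0
    out[mask > 1 - amount / 2] = 1.0
    return out

# Example distortion set; the class label is the index of the distortion.
DISTORTIONS = [add_gaussian_noise, add_gaussian_blur, add_salt_pepper]

def build_distortion_classifier(input_shape=(128, 128, 3),
                                n_classes=len(DISTORTIONS)):
    """Small generic CNN for distortion-type classification."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage: label each synthetically distorted image with the index of the
# distortion that produced it, then train the classifier on those pairs.
clean = np.random.rand(8, 128, 128, 3).astype("float32")  # stand-in for real crop images
x = np.stack([DISTORTIONS[i % len(DISTORTIONS)](img) for i, img in enumerate(clean)])
y = np.array([i % len(DISTORTIONS) for i in range(len(clean))])
model = build_distortion_classifier()
model.fit(x, y, epochs=1, batch_size=4, verbose=0)
```

In the paper the distorted images come from four public agricultural datasets and the network is the author's own design; the sketch only shows how a distortion label can be attached to each synthetically degraded image and learned by a classifier.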

References

  • [1] A. Chetouani, A. Beghdadi and M. Deriche, “A hybrid system for distortion classification and image quality evaluation,” Signal Processing: Image Communication, vol. 27, no. 9, pp. 948-960, October 2012. doi:10.1016/j.image.2012.06.001
  • [2] J.-Y. Lee and Y.-J. Kim, “Optimal image quality assessment based on distortion classification and color perception,” KSII Transactions on Internet and Information Systems, vol. 10, no. 1, pp. 257-271, January 2016. doi:10.3837/tiis.2016.01.015
  • [3] O. Alaql, K. Ghazinour and C. C. Lu, “Classification of image distortions for image quality assessment,” in Proc. of International Conference on Computational Science and Computational Intelligence, 15-17 December 2016, Las Vegas, NV, USA [Online]. Available: IEEE Xplore, https://ieeexplore.ieee.org/document/7881422. [Accessed: 20 Sept. 2022].
  • [4] H. Wang, L. Zuo and J. Fu, “Distortion recognition for image quality assessment with convolutional neural network,” in Proc. of IEEE International Conference on Multimedia and Expo, 11- 15 July 2016, Seattle, WA, USA [Online]. Available: IEEE Xplore, https://ieeexplore.ieee.org/abstract/document/7552936. [Accessed: 20 Sept. 2022].
  • [5] H. Al-Bandawi and G. Deng, “Classification of image distortion based on the generalized Benford’s law,” Multimedia Tools and Applications, vol. 78, pp. 25611-25628, May 2019. doi:10.1007/s11042-019-7668-3
  • [6] M. Ha, Y. Byun, J. Kim, J. Lee, Y. Lee and S. Lee, “Selective deep convolutional neural network for low cost distorted image classification,” IEEE Access, vol. 7, pp. 133030-133042, September 2019. doi:10.1109/ACCESS.2019.2939781
  • [7] O. Messai, F. Hachouf and Z. A. Seghir, “Automatic distortion type recognition for stereoscopic images,” in Proc. of International Conference on Advanced Electrical Engineering, 19-21 November 2019, Algiers, Algeria [Online]. Available: IEEE Xplore, https://ieeexplore.ieee.org/document/9015082. [Accessed: 20 Sept. 2022].
  • [8] M. Buczkowski and R. Stasiński, “Convolutional neural network-based image distortion classification,” in Proc. of International Conference on Systems, Signals and Image Processing, 5-7 June 2019, Osijek, Croatia [Online]. Available: IEEE Xplore, https://ieeexplore.ieee.org/document/8787212. [Accessed: 20 Sept. 2022].
  • [9] D. Liang, X. Gao, W. Lu and L. He, “Deep multi-label learning for image distortion identification,” Signal Processing, vol. 172, p. 107536, July 2020. doi:10.1016/j.sigpro.2020.107536
  • [10] Z. A. Khan, A. Beghdadi, M. Kaaniche and F. A. Cheikh, “Residual networks based distortion classification and ranking for laparoscopic image quality assessment,” in Proc. of IEEE International Conference on Image Processing, 25-28 October 2020, Abu Dhabi, United Arab Emirates [Online]. Available: IEEE Xplore, https://ieeexplore.ieee.org/document/9191111. [Accessed: 20 Sept. 2022].
  • [11] S. Bianco, L. Celona and P. Napoletano, “Disentangling image distortions in deep feature space,” Pattern Recognition Letters, vol. 148, pp. 128-135, August 2021. doi:10.1016/j.patrec.2021.05.008
  • [12] K. M. Roccapriore, N. Creange, M. Ziatdinov and S. V. Kalinin, “Identification and correction of temporal and spatial distortions in scanning transmission electron microscopy,” Ultramicroscopy, vol. 229, p. 113337, October 2021. doi:10.1016/j.ultramic.2021.113337
  • [13] Y. Zhang, D. M. Chandler and X. Mou, “Quality assessment of multiply and singly distorted stereoscopic images via adaptive construction of cyclopean views,” Signal Processing: Image Communication, vol. 94, p. 116175, May 2021. doi:10.1016/j.image.2021.116175
  • [14] C. Yan, T. Teng, Y. Liu, Y. Zhang, H. Wang and X. Ji, “Precise no-reference image quality evaluation based on distortion identification,” ACM Trans. Multimedia Comput. Commun. Appl., vol. 17, no. 3s, pp. 110:1-110:21, November 2021. doi:10.1145/3468872
  • [15] D. Liang, X. Gao, W. Lu and J. Li, “Systemic distortion analysis with deep distortion directed image quality assessment models,” Signal Processing: Image Communication, vol. 109, p. 116870, November 2022. doi:10.1016/j.image.2022.116870
  • [16] Y. Wang, H. Li and S. Hou, “Distortion detection and removal integrated method for image restoration,” Digital Signal Processing, vol. 127, p. 103528, July 2022. doi:10.1016/j.dsp.2022.103528
  • [17] H. Fazlali, S. Shirani, M. Bradford and T. Kirubarajan, “Single image rain/snow removal using distortion type information,” Multimedia Tools and Applications, vol. 81, pp. 14105–14131, February 2022. doi:10.1007/s11042-022-12012-0
  • [18] L. Xu and X. Jiang, “Blind image quality assessment by pairwise ranking image series,” China Communications, 2023. doi:10.23919/JCC.2023.00.102
  • [19] G. Li, F. Wang, L. Zhou, S. Jin, X. Xie, C. Ding, X. Pan and W. Zhang, “MCANet: Multi-channel attention network with multi-color space encoder for underwater image classification,” Computers and Electrical Engineering, vol. 108, p. 108724, May 2023. doi:10.1016/j.compeleceng.2023.108724
  • [20] L. H. F. P. Silva, J. D. Dias Júnior, J. F. B. Santos, J. F. Mari, M. C. Escarpinati and A. R. Backes, “Classification of UAVs' distorted images using convolutional neural networks,” in Proc. of XVI Workshop de Visão Computacional, October 2020 [Online]. Available: ResearchGate, https://www.researchgate.net/publication/344775288_Classification_of_UAVs'_distorted_images_using_Convolutional_Neural_Networks. [Accessed: 12 Aug. 2022].
  • [21] L. H. F. P. Silva, J. D. Dias Júnior, J. F. B. Santos, J. F. Mari, M. C. Escarpinati and A. R. Backes, “Non-linear distortion recognition in UAVs’ images using deep learning,” in Proc. of 16th International Conference on Computer Vision Theory and Applications, January 2021 [Online]. Available: ResearchGate, https://www.researchgate.net/publication/349381713_Non-linear_Distortion_Recognition_in_UAVs'_Images_using_Deep_Learning. [Accessed: 12 Aug. 2022].
  • [22] Y. Lu and S. Young, “A survey of public datasets for computer vision tasks in precision agriculture,” Computers and Electronics in Agriculture, vol. 178, p. 105760, November 2020. doi:10.1016/j.compag.2020.105760
  • [23] T. M. Giselsson, R. N. Jørgensen, P. K. Jensen, M. Dyrmann and H. S. Midtiby, “A public image database for benchmark of plant seedling classification algorithms,” arXiv preprint, 15 November 2017 [Online]. Available: arXiv, https://arxiv.org/abs/1711.05458. [Accessed: 7 Sept. 2022].
  • [24] N. Häni, P. Roy and V. Isler, “MinneApple: a benchmark dataset for apple detection and segmentation,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 852-858, January 2020. doi:10.1109/LRA.2020.2965061
  • [25] I. Sa, Z. Ge, F. Dayoub, B. Upcroft, T. Perez and C. McCool, “DeepFruits: a fruit detection system using deep neural networks,” Sensors, vol. 16, no. 8, August 2016. doi:10.3390/s16081222
  • [26] M. Krestenitis, E. K. Raptis, A. Ch. Kapoutsis, K. Ioannidis, E. B. Kosmatopoulos, S. Vrochidis and I. Kompatsiaris, “CoFly-WeedDB: A UAV image dataset for weed detection and species identification,” Data in Brief, vol. 45, p. 108575, December 2022. doi:10.1016/j.dib.2022.108575
  • [27] W. Burger and M. J. Burge, Digital Image Processing: An Algorithmic Introduction Using Java, Second Edition, Springer-Verlag London, 2016, p. 756.
  • [28] W. Burger and M. J. Burge, Digital Image Processing: An Algorithmic Introduction Using Java, Second Edition, Springer-Verlag London, 2016, p. 423.
  • [29] C. C. Aggarwal, Neural Networks and Deep Learning: A Textbook, Springer International Publishing AG, 2018, p. 315.
  • [30] Y. Lecun, L. Bottou, Y. Bengio and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, November 1998. doi:10.1109/5.726791
  • [31] A. Krizhevsky, I. Sutskever and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Proc. of NIPS, 2012 [Online]. Available: NeurIPS Proceedings, https://proceedings.neurips.cc/paper_files/paper/2012. [Accessed: 22 Dec. 2022].
  • [32] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in Proc. of ICLR, 2015 [Online]. Available: arXiv, https://arxiv.org/abs/1409.1556v6. [Accessed: 22 Dec. 2022].



Details

Primary Language: English
Subjects: Computer Software
Section: Research Article
Authors

Şafak Altay Açar (ORCID: 0000-0001-6502-7456)

Publication Date: 31 August 2023
Submission Date: 17 February 2023
Acceptance Date: 15 July 2023
Published in Issue: Year 2023, Volume: 9, Issue: 2

Cite

IEEE: Ş. Altay Açar, “Classification of Distortions in Agricultural Images Using Convolutional Neural Network,” GMBD, vol. 9, no. 2, pp. 174–182, 2023.

Gazi Journal of Engineering Sciences (GJES) publishes open access articles under a Creative Commons Attribution 4.0 International License (CC BY).