Identification Of Walnut Variety From The Leaves Using Deep Learning Algorithms
Year 2023, Volume: 12, Issue: 2, 531-543, 27.06.2023
Alper Talha Karadeniz, Erdal Başaran, Yuksel Celık
Abstract
To determine the variety from walnut leaves, each leaf must be examined in detail. Varieties that are very similar to one another in color and shape are difficult to distinguish with the human eye, and examining and classifying plant leaves from many classes one by one is impractical in terms of time and cost. Studies on walnut varieties in the literature generally report classifications obtained from experimental work in a laboratory environment, and studies using walnut leaf images involve only two or three classes. In this study, a unique walnut dataset was first created from 1751 leaf images of 18 different walnut varieties. The images were preprocessed by cropping, denoising, and resizing. Classification was then performed with CNN models widely used in the literature, on both the original dataset and a dataset enlarged by data augmentation; performance metrics were recorded and the results compared. The VGG16 CNN model gave the best results on both datasets: its accuracy was 0.8552 on the original dataset and 0.9055 on the augmented dataset. These accuracy values show that the walnut varieties are classified successfully.
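The abstract reports accuracy as the headline metric for comparing CNN models. As a rough, illustrative sketch (not code from the paper), multi-class accuracy is simply the fraction of predicted labels that match the true labels; the variety labels and values below are hypothetical:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true class labels."""
    if len(y_true) != len(y_pred):
        raise ValueError("label lists must be the same length")
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical example with 18 walnut-variety classes encoded as 0..17:
y_true = [0, 3, 17, 5, 5, 2]
y_pred = [0, 3, 16, 5, 1, 2]
print(accuracy(y_true, y_pred))  # 4 of 6 correct -> 0.666...
```

On a real evaluation this would be computed over the whole held-out test split, alongside the other metrics (precision, recall, F1) that are typically recorded for class-imbalanced leaf datasets.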
Thanks
We thank the director, Dr. Yılmaz Boz, and the staff of the Yalova Atatürk Horticultural Central Research Institute for allowing us to create a dataset from the walnut trees grown in the institute's experimental orchard. We thank Prof. Dr. Turan KARADENİZ and Assist. Prof. Tuba BAK for their contributions to the field study. The authors also thank Assist. Prof. Emrah GÜLER for the English revision of the manuscript.