CNNTuner: Image Classification with A Novel CNN Model Optimized Hyperparameters

Year 2023, Volume: 12, Issue: 3, 746 - 763, 28.09.2023
https://doi.org/10.17798/bitlisfen.1294417

Abstract

Today, the impact of deep learning on computer vision applications grows every day. Deep learning techniques are applied in many areas, such as clothing search and automatic product recommendation. The main task in these applications is to perform the classification process automatically. However, the high similarity between different apparel objects makes classification difficult. In this paper, a new deep learning model based on convolutional neural networks (CNNs) is proposed to solve this classification problem. Unlike traditional machine learning algorithms, these networks extract features from images using convolutional layers. Because the extracted features are highly discriminative, good classification performance can be obtained. Performance varies with the number of filters and the window sizes in the convolutional layers that extract the features. Since more than one parameter influences performance, the best-performing configuration can only be determined after many experiments, and this parameterization process is difficult and laborious. To address this issue, the parameters of a newly proposed CNN-based deep learning model were optimized with the Keras Tuner tool on the Fashion MNIST (F-MNIST) dataset, which contains multi-class fashion images. The performance results of the model were obtained on data split according to the 5-fold cross-validation technique. In addition, to measure the impact of the optimized parameters on classification, the performance of the proposed model, called CNNTuner, is compared with state-of-the-art (SOTA) studies.
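
As an illustration of the approach described above, the following minimal Python sketch tunes a small CNN on F-MNIST with Keras Tuner. The two-block architecture and the search ranges for filter counts, kernel (window) sizes, dense width, dropout, and learning rate are illustrative assumptions rather than the paper's actual CNNTuner search space, and a simple validation split stands in for the 5-fold cross-validation used in the study.

import keras_tuner as kt
import tensorflow as tf

# Load F-MNIST, scale pixels to [0, 1], and add a channel dimension.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

def build_model(hp):
    # Each hp.* call defines one tunable hyperparameter and its search range
    # (all ranges here are assumptions for the sketch, not the paper's values).
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(
            filters=hp.Int("filters_1", 32, 128, step=32),   # tuned filter count
            kernel_size=hp.Choice("kernel_1", [3, 5]),       # tuned window size
            activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(
            filters=hp.Int("filters_2", 32, 128, step=32),
            kernel_size=hp.Choice("kernel_2", [3, 5]),
            activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(hp.Int("dense_units", 64, 256, step=64),
                              activation="relu"),
        tf.keras.layers.Dropout(hp.Float("dropout", 0.2, 0.5, step=0.1)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(hp.Choice("lr", [1e-2, 1e-3, 1e-4])),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"])
    return model

# Random search over the space defined in build_model, ranked by
# validation accuracy.
tuner = kt.RandomSearch(build_model, objective="val_accuracy",
                        max_trials=10, directory="tuning",
                        project_name="cnntuner_sketch")
tuner.search(x_train, y_train, epochs=5, validation_split=0.2)

best_model = tuner.get_best_models(num_models=1)[0]
print(best_model.evaluate(x_test, y_test))

RandomSearch is used here for brevity; Keras Tuner also provides Hyperband and Bayesian optimization, and the best configuration found can be retrained and evaluated fold by fold to reproduce a 5-fold cross-validation protocol.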

References

  • [1] Y. Seo and K. Shin, “Hierarchical convolutional neural networks for fashion image classification,” Expert Syst. Appl., vol. 116, pp. 328–339, 2019, doi: 10.1016/j.eswa.2018.09.022.
  • [2] S. G. Eshwar, G. G. P. J, A. V. Rishikesh, N. A. Charan, and V. Umadevi, “Apparel classification using convolutional neural networks,” in 2016 International Conference on ICT in Business Industry & Government (ICTBIG), pp. 1–5, 2016, doi: 10.1109/ICTBIG.2016.7892641.
  • [3] K. Hara, V. Jagadeesh, and R. Piramuthu, “Fashion apparel detection: The role of deep convolutional neural network and pose-dependent priors,” in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1–9, 2016, doi: 10.1109/WACV.2016.7477611.
  • [4] M. Kayed, A. Anter, and H. Mohamed, “Classification of garments from fashion MNIST dataset using CNN LeNet-5 architecture,” in 2020 International Conference on Innovative Trends in Communication and Computer Engineering (ITCE), pp. 238–243, 2020, doi: 10.1109/ITCE48509.2020.9047776.
  • [5] S. Metlek, “A new proposal for the prediction of an aircraft engine fuel consumption: a novel CNN-BiLSTM deep neural network model,” Aircr. Eng. Aerosp. Technol., vol. 95, no. 5, pp. 838–848, Jan. 2023, doi: 10.1108/AEAT-05-2022-0132.
  • [6] A. Kishwar and A. Zafar, “Fake news detection on Pakistani news using machine learning and deep learning,” Expert Syst. Appl., vol. 211, p. 118558, 2023, doi: 10.1016/j.eswa.2022.118558.
  • [7] M. S. Khan, N. Tafshir, K. N. Alam, A. R. Dhruba, M. M. Khan, A. A. Albraikan, and F. A. Almalki, “Deep learning for ocular disease recognition: an inner-class balance,” Comput. Intell. Neurosci., vol. 2022, 2022.
  • [8] H. Çetiner and B. Kara, “Recurrent neural network based model development for wheat yield forecasting,” J. Eng. Sci. Adiyaman Univ., vol. 9, no. 16, pp. 204–218, 2022, doi: 10.54365/adyumbd.1075265.
  • [9] J. Reizenstein, R. Shapovalov, P. Henzler, L. Sbordone, P. Labatut, and D. Novotny, “Common objects in 3D: Large-scale learning and evaluation of real-life 3D category reconstruction,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10901–10911, 2021.
  • [10] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vis., vol. 115, no. 3, pp. 211–252, 2015, doi: 10.1007/s11263-015-0816-y.
  • [11] M. A. Morid, A. Borjali, and G. Del Fiol, “A scoping review of transfer learning research on medical image analysis using ImageNet,” Comput. Biol. Med., vol. 128, p. 104115, 2021, doi: 10.1016/j.compbiomed.2020.104115.
  • [12] X. Wang and T. Zhang, “Clothes search in consumer photos via color matching and attribute learning,” in Proceedings of the 19th ACM International Conference on Multimedia, pp. 1353–1356, 2011.
  • [13] K. V. Greeshma and K. Sreekumar, “Hyperparameter optimization and regularization on fashion-MNIST classification,” Int. J. Recent Technol. Eng., vol. 8, no. 2, pp. 3713–3719, 2019.
  • [14] S. Bhatnagar, D. Ghosal, and M. H. Kolekar, “Classification of fashion article images using convolutional neural networks,” in 2017 Fourth International Conference on Image Information Processing (ICIIP), pp. 1–6, 2017, doi: 10.1109/ICIIP.2017.8313740.
  • [15] H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms,” arXiv preprint arXiv:1708.07747, 2017.
  • [16] H. Çetiner, “Cataract disease classification from fundus images with transfer learning based deep learning model on two ocular disease datasets,” Gumushane University Journal of Science and Technology, vol. 13, no. 2, pp. 258–269, Jan. 2023, doi: 10.17714/gumusfenbil.1168842.
  • [17] S. Suganyadevi, V. Seethalakshmi, and K. Balasamy, “A review on deep learning in medical image analysis,” Int. J. Multimed. Inf. Retr., vol. 11, no. 1, pp. 19–38, 2022, doi: 10.1007/s13735-021-00218-1.
  • [18] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” Int. Conf. Learn. Represent., Dec. 2014.
  • [19] A. Vijayalakshmi and B. Rajesh Kanna, “Deep learning approach to detect malaria from microscopic images,” Multimed. Tools Appl., vol. 79, no. 21–22, pp. 15297–15317, Jun. 2020, doi: 10.1007/s11042-019-7162-y.
  • [20] S. Zhang, W. Huang, and C. Zhang, “Three-channel convolutional neural networks for vegetable leaf disease recognition,” Cogn. Syst. Res., vol. 53, pp. 31–41, 2019, doi: 10.1016/j.cogsys.2018.04.006.
  • [21] S. Zhang, S. Zhang, C. Zhang, X. Wang, and Y. Shi, “Cucumber leaf disease identification with global pooling dilated convolutional neural network,” Comput. Electron. Agric., vol. 162, pp. 422–430, 2019, doi: 10.1016/j.compag.2019.03.012.
  • [22] N. Saxena, V. Sharma, R. Sharma, K. K. Sharma, and S. Gupta, “Design, modeling, and frequency domain analysis with parametric variation for fixed-guided vibrational piezoelectric energy harvesters,” Microprocess. Microsyst., vol. 95, p. 104692, Nov. 2022, doi: 10.1016/j.micpro.2022.104692.
  • [23] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Adv. Neural Inf. Process. Syst., vol. 25, 2012.
  • [24] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in European Conference on Computer Vision (ECCV), pp. 818–833, 2014.
  • [25] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
  • [26] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, “Inception-v4, Inception-ResNet and the impact of residual connections on learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2017.
  • [27] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778, Jun. 2016, doi: 10.1109/CVPR.2016.90.
  • [28] C. Dong, Y. Cai, S. Dai, J. Wu, G. Tong, W. Wang, Z. Wu, H. Zhang, and J. Xia, “An optimized optical diffractive deep neural network with OReLU function based on genetic algorithm,” Opt. Laser Technol., vol. 160, p. 109104, 2023, doi: 10.1016/j.optlastec.2022.109104.
  • [29] Y. Sun, M. Dong, M. Yu, L. Lu, S. Liang, J. Xia, and L. Zhu, “Modeling and simulation of all-optical diffractive neural network based on nonlinear optical materials,” Opt. Lett., vol. 47, no. 1, pp. 126–129, 2022, doi: 10.1364/OL.442970.
  • [30] T. Zhou, X. Lin, J. Wu, Y. Chen, H. Xie, Y. Li, J. Fan, H. Wu, L. Fang, and Q. Dai, “Large-scale neuromorphic optoelectronic computing with a reconfigurable diffractive processing unit,” Nat. Photonics, vol. 15, no. 5, pp. 367–373, 2021, doi: 10.1038/s41566-021-00796-w.
  • [31] H. Dou, Y. Deng, T. Yan, H. Wu, X. Lin, and Q. Dai, “Residual D2NN: training diffractive deep neural networks via learnable light shortcuts,” Opt. Lett., vol. 45, no. 10, pp. 2688–2691, 2020, doi: 10.1364/OL.389696.
  • [32] C. Shan, A. Li, and X. Chen, “Deep delay rectified neural networks,” J. Supercomput., vol. 79, no. 1, pp. 880–896, 2023, doi: 10.1007/s11227-022-04704-z.
  • [33] J. Zhang, Q. Xu, L. Guo, L. Ding, and S. Ding, “A novel capsule network based on deep routing and residual learning,” Soft Comput., 2023, doi: 10.1007/s00500-023-08018-x.
There are 33 references in total.

Details

Primary Language English
Subjects Engineering
Section Research Article
Authors

Halit Çetiner 0000-0001-7794-2555

Sedat Metlek 0000-0002-0393-9908

Early View Date September 23, 2023
Publication Date September 28, 2023
Submission Date May 9, 2023
Acceptance Date August 13, 2023
Published Issue Year 2023, Volume: 12, Issue: 3

How to Cite

IEEE H. Çetiner and S. Metlek, “CNNTuner: Image Classification with A Novel CNN Model Optimized Hyperparameters”, Bitlis Eren Üniversitesi Fen Bilimleri Dergisi, vol. 12, no. 3, pp. 746–763, 2023, doi: 10.17798/bitlisfen.1294417.


