Research Article

Designing Effective Models for COVID-19 Diagnosis through Transfer Learning and Interlayer Visualization

Year 2023, Volume: 11, Issue: 4, 340 - 345, 22.12.2023
https://doi.org/10.17694/bajece.1274253

Abstract

Creating a model from scratch that fits a given dataset can be laborious and time-consuming. The difficulty of designing a new model varies with factors such as model complexity and the size and characteristics of the dataset. The number of variables, the structure of the data, class imbalance, and dataset size all matter when deciding which model to use, and lengthy experimental studies are needed to arrive at the most appropriate architecture. In this study, we investigated how transfer learning models can be used to address this problem. Experiments were conducted on a COVID-19 dataset with transfer learning models, and the most successful of them were identified. Layers that did not contribute to performance and could not extract the necessary features from the dataset were then identified with the interlayer visualization method and removed. Removing these unnecessary layers yielded new models that are faster, less complex, and have fewer parameters. With the new models derived from the most successful transfer learning models in this way, the classes were classified with an accuracy of 98.8%, and images belonging to the COVID-19 class were classified with a precision of 99.7%.
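
The workflow described above (take a pretrained backbone, inspect intermediate activations, cut the network at the last layer that still extracts useful features, and attach a lightweight classification head) can be illustrated with a short sketch. The snippet below is not the authors' implementation; it is a minimal, hypothetical example that assumes a TensorFlow/Keras workflow, VGG16 as one candidate backbone, 'block4_pool' as the cut-off layer, and a four-class radiography dataset. The cut point and class count would have to be chosen from the actual visualization results and dataset.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

# Load an ImageNet-pretrained backbone without its classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Hypothetical choice: suppose interlayer visualization suggests that layers
# beyond 'block4_pool' add little for chest X-rays, so the model is cut there.
cut_layer = base.get_layer("block4_pool")
truncated = Model(base.input, cut_layer.output, name="truncated_backbone")
truncated.trainable = False  # keep pretrained weights frozen (transfer learning)

# Attach a small classification head; 4 classes is an assumption for the
# COVID-19 radiography dataset and should be adjusted to the data at hand.
x = layers.GlobalAveragePooling2D()(truncated.output)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(4, activation="softmax")(x)
model = Model(truncated.input, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Interlayer visualization: expose every conv layer's activations for one image
# so that uninformative layers can be spotted before deciding where to cut.
conv_layers = [l for l in base.layers if "conv" in l.name]
viz_model = Model(base.input, [l.output for l in conv_layers])
dummy = np.random.rand(1, 224, 224, 3).astype("float32")  # stand-in for an X-ray
feature_maps = viz_model(dummy)
for layer, fmap in zip(conv_layers, feature_maps):
    print(layer.name, fmap.shape)
```

In such a setup, the per-layer feature maps would be plotted and compared; layers whose maps carry no discernible structure for the target images are candidates for removal, which is what shrinks the parameter count and speeds up the derived model.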

References

  • [1] H. E. Kim, A. Cosa-Linan, N. Santhanam, M. Jannesari, M. E. Maros, and T. Ganslandt, ‘Transfer learning for medical image classification: a literature review’, BMC Med. Imaging, vol. 22, no. 1, p. 69, Apr. 2022.
  • [2] S. Atasever, N. Azginoglu, D. S. Terzi, and R. Terzi, ‘A comprehensive survey of deep learning research on medical image analysis with focus on transfer learning’, Clin. Imaging, vol. 94, pp. 18–41, Feb. 2023.
  • [3] H. S. Nogay, T. C. Akinci, and M. Yilmaz, ‘Comparative experimental investigation and application of five classic pre-trained deep convolutional neural networks via transfer learning for diagnosis of breast cancer’, Advances in Science and Technology, vol. 15, no. 3, 2021.
  • [4] Y. Dogan and H. Y. Keles, ‘Iterative facial image inpainting based on an encoder-generator architecture’, Neural Comput. Appl., vol. 34, no. 12, pp. 10001–10021, Jun. 2022.
  • [5] M. Ataş, C. Özdemir, İ. Ataş, B. Ak, and E. Özeroğlu, ‘Biometric identification using panoramic dental radiographic images with few-shot learning’, Turk. J. of Electr. Eng. Comput. Sci., vol. 30, no. 3, pp. 1115–1126, Jan. 2022.
  • [6] Y. Dogan and H. Yalim Keles, ‘Stability and diversity in generative adversarial networks’, in 2019 27th Signal Processing and Communications Applications Conference (SIU), Sivas, Turkey, 2019.
  • [7] C. Ozdemir, M. A. Gedik, and Y. Kaya, ‘Age estimation from left-hand radiographs with deep learning methods’, Trait. Du Signal, vol. 38, no. 6, pp. 1565–1574, Dec. 2021.
  • [8] M. Iman, K. Rasheed, and H. R. Arabnia, ‘A review of Deep Transfer Learning and recent advancements’, arXiv [cs.LG], 18-Jan-2022.
  • [9] M. D. Zeiler and R. Fergus, ‘Visualizing and understanding convolutional networks’, in Computer Vision – ECCV 2014, Cham: Springer International Publishing, 2014, pp. 818–833.
  • [10] G. Urban et al., ‘Do deep convolutional nets really need to be deep and convolutional?’, arXiv [stat.ML], 17-Mar-2016.
  • [11] D. Bau, B. Zhou, A. Khosla, A. Oliva, and A. Torralba, ‘Network dissection: Quantifying interpretability of deep visual representations’, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, 2017.
  • [12] H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein, ‘Visualizing the loss landscape of neural nets’, arXiv [cs.LG], 28-Dec-2017.
  • [13] H. Harapan et al., ‘Coronavirus disease 2019 (COVID-19): A literature review’, J. Infect. Public Health, vol. 13, no. 5, pp. 667–673, May 2020.
  • [14] J. Elliott et al., ‘Predictive symptoms for COVID-19 in the community: REACT-1 study of over 1 million people’, PLoS Medicine, vol. 18, no. 9, p. e1003777, 2021.
  • [15] U. Jain, ‘Effect of COVID-19 on the organs’, Cureus, vol. 12, no. 8, 2020.
  • [16] D. L. Weiner, V. Balasubramaniam, S. I. Shah, J. R. Javier, and Pediatric Policy Council, ‘COVID-19 impact on research, lessons learned from COVID-19 research, implications for pediatric research’, Pediatr. Res., vol. 88, no. 2, pp. 148–150, Aug. 2020.
  • [17] Ö. F. Ertuğrul, E. Acar, A. Öztekin, and E. Aldemir, ‘Detection of Covid-19 from X-ray images via ensemble of features extraction methods employing randomized neural networks’, European Journal of Technique (EJT), vol. 11, no. 2, pp. 248–254, 2021.
  • [18] Y. Kaya, Z. Yiner, M. Kaya, and F. Kuncan, ‘A new approach to COVID-19 detection from X-ray images using angle transformation with GoogleNet and LSTM’, Measurement Science and Technology, vol. 33, no. 12, p. 124011, 2022.
  • [19] Kaggle. COVID-19 Radiography Database. https://www.kaggle.com/datasets/tawsifurrahman/covid19-radiography-database. Accessed 18 January 2023
  • [20] M. F. Aslan, K. Sabanci, A. Durdu, and M. F. Unlersen, ‘COVID-19 diagnosis using state-of-the-art CNN architecture features and Bayesian Optimization’, Comput. Biol. Med., vol. 142, no. 105244, p. 105244, Mar. 2022.
  • [21] M. F. Sohan, A. Basalamah, and M. Solaiman, ‘COVID-19 detection using machine learning: a large scale assessment of x-ray and CT image datasets’, J. Electron. Imaging, vol. 31, no. 04, Mar. 2022.
  • [22] A. T. Sahlol, D. Yousri, A. A. Ewees, M. A. A. Al-Qaness, R. Damasevicius, and M. A. Elaziz, ‘COVID-19 image classification using deep features and fractional-order marine predators algorithm’, Sci. Rep., vol. 10, no. 1, p. 15364, Sep. 2020.
  • [23] J. Abdollahi and L. Mahmoudi, ‘An artificial intelligence system for detecting the types of the epidemic from X-rays : Artificial intelligence system for detecting the types of the epidemic from X-rays’, in 2022 27th International Computer Conference, Computer Society of Iran (CSICC), Tehran, Iran, Islamic Republic of, 2022.
  • [24] R. Abdrakhmanov, M. Altynbekov, A. Abu, A. Shomanov, D. Viderman, and M.-H. Lee, ‘Few-shot learning approach for COVID-19 detection from X-ray images’, in 2021 16th International Conference on Electronics Computer and Computation (ICECCO), Kaskelen, Kazakhstan, 2021.

Details

Primary Language: English
Subjects: Artificial Intelligence
Section: Research Article
Authors

Cüneyt Özdemir 0000-0002-9252-5888

Early View Date: January 25, 2024
Publication Date: December 22, 2023
Published in Issue: Year 2023, Volume: 11, Issue: 4

How to Cite

APA Özdemir, C. (2023). Designing Effective Models for COVID-19 Diagnosis through Transfer Learning and Interlayer Visualization. Balkan Journal of Electrical and Computer Engineering, 11(4), 340-345. https://doi.org/10.17694/bajece.1274253

All articles published by BAJECE are licensed under the Creative Commons Attribution 4.0 International License. This permits anyone to copy, redistribute, remix, transmit, and adapt the work, provided the original work and source are appropriately cited.