Research Article
Year 2024, Volume: 13, Issue: 3, 844–850, 26.09.2024
https://doi.org/10.17798/bitlisfen.1505636


Automatic Classification of Melanoma Skin Cancer Images with Vision Transform Model and Transfer Learning


Abstract

Melanoma is one of the most aggressive and lethal forms of skin cancer, so early and accurate diagnosis is critical to patient health. Diagnostic procedures depend on human expertise, which increases the possibility of error. With advancing technology, progress in deep learning models has raised hope for the automatic detection of melanoma skin cancer by computer systems. The Vision Transformer (ViT) model, developed by Google, has achieved highly successful results in classification tasks.
In this study, the transfer learning method was applied with the ViT model using a melanoma skin cancer dataset obtained from Kaggle, and the performance of the model was evaluated. Before training began, pre-processing was applied to the dataset, which consists of 9600 training and 1000 test images. The model was trained and tested in Python on the Colab platform. In experimental studies on the test dataset, the model reached an accuracy of 93.5%, competitive with existing models.
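The ViT model applied here treats an image as a sequence of fixed-size patch tokens, following the "image is worth 16x16 words" formulation of [13]. As a minimal sketch of that tokenization step (the 224×224 input size and 16×16 patch size are the standard ViT-Base settings, assumed here since the abstract does not state the configuration used):

```python
import numpy as np

def image_to_patch_tokens(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Split an (H, W, C) image into flattened (num_patches, patch*patch*C) tokens."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0, "image must tile evenly into patches"
    # Reshape into a grid of patches, then flatten each patch into one token vector.
    grid = image.reshape(h // patch, patch, w // patch, patch, c)
    grid = grid.transpose(0, 2, 1, 3, 4)        # (grid_h, grid_w, patch, patch, c)
    return grid.reshape(-1, patch * patch * c)  # (num_patches, patch_dim)

# A 224x224 RGB image yields 14*14 = 196 tokens of dimension 16*16*3 = 768,
# matching the ViT-Base configuration of [13].
img = np.zeros((224, 224, 3), dtype=np.float32)
tokens = image_to_patch_tokens(img)
print(tokens.shape)  # (196, 768)
```

In the full model, each token is linearly projected, a learnable class token and position embeddings are added, and the sequence is passed through Transformer encoder layers; transfer learning then fine-tunes a pretrained encoder with a new classification head for the melanoma task.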

References

  • [1] R. Deepa, G. ALMahadin, and A. Sivasamy, “Early detection of skin cancer using AI: Deciphering dermatology images for melanoma detection,” AIP Adv., vol. 14, no. 4, 2024.
  • [2] I. H. Sarker, “Deep learning: a comprehensive overview on techniques, taxonomy, applications and research directions,” SN Comput. Sci., vol. 2, no. 6, p. 420, 2021.
  • [3] K. Al-Hammuri, F. Gebali, A. Kanan, and I. T. Chelvan, “Vision transformer architecture and applications in digital health: a tutorial and survey,” Vis. Comput. Ind. Biomed. Art, vol. 6, no. 1, p. 14, 2023.
  • [4] A. Sriwastawa and J. A. Arul Jothi, “Vision transformer and its variants for image classification in digital breast cancer histopathology: A comparative study,” Multimed. Tools Appl., vol. 83, no. 13, pp. 39731–39753, 2024.
  • [5] R. Kaur, H. GholamHosseini, R. Sinha, and M. Lindén, “Melanoma classification using a novel deep convolutional neural network with dermoscopic images,” Sensors, vol. 22, no. 3, p. 1134, 2022.
  • [6] P. Shobhit and N. Kumar, “Vision Transformer and Attention-Based Melanoma Disease Classification,” in 2023 4th International Conference on Communication, Computing and Industry 6.0 (C216), IEEE, 2023, pp. 1–6.
  • [7] M. A. Arshed, S. Mumtaz, M. Ibrahim, S. Ahmed, M. Tahir, and M. Shafi, “Multi-class skin cancer classification using vision transformer networks and convolutional neural network-based pre-trained models,” Information, vol. 14, no. 7, p. 415, 2023.
  • [8] S. Ghosh, S. Dhar, R. Yoddha, S. Kumar, A. K. Thakur, and N. D. Jana, “Melanoma Skin Cancer Detection Using Ensemble of Machine Learning Models Considering Deep Feature Embeddings,” Procedia Comput. Sci., vol. 235, pp. 3007–3015, 2024.
  • [9] S. R. Waheed et al., “Melanoma skin cancer classification based on CNN deep learning algorithms,” Malaysian J. Fundam. Appl. Sci., vol. 19, no. 3, pp. 299–305, 2023.
  • [10] Z. Chen et al., “Vision transformer adapter for dense predictions,” arXiv preprint arXiv:2205.08534, 2022.
  • [11] A. Parvaiz, M. A. Khalid, R. Zafar, H. Ameer, M. Ali, and M. M. Fraz, “Vision transformers in medical computer vision—A contemplative retrospection,” Eng. Appl. Artif. Intell., vol. 122, p. 106126, 2023.
  • [12] X. Su et al., “Vitas: Vision transformer architecture search,” in European Conference on Computer Vision, Springer, 2022, pp. 139–157.
  • [13] A. Dosovitskiy et al., “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint arXiv:2010.11929, 2020.
  • [14] G. Mesnil et al., “Unsupervised and transfer learning challenge: a deep learning approach,” in Proceedings of ICML Workshop on Unsupervised and Transfer Learning, JMLR Workshop and Conference Proceedings, 2012, pp. 97–110.
  • [15] S. Ghosal and K. Sarkar, “Rice Leaf Diseases Classification Using CNN With Transfer Learning,” in 2020 IEEE Calcutta Conference (CALCON), IEEE, 2020, pp. 230–236.
  • [16] A. Rahmouni, M. A. Sabri, A. Ennaji, and A. Aarab, “Skin Lesion Classification Based on Vision Transformer (ViT),” in The International Conference on Artificial Intelligence and Smart Environment, Springer, 2023, pp. 472–477.
  • [17] A. T. Karadeniz, Y. Çelik, and E. Başaran, “Classification of walnut varieties obtained from walnut leaf images by the recommended residual block based CNN model,” Eur. Food Res. Technol., pp. 1–12, 2022.
  • [18] E. Başaran, Z. Cömert, and Y. Celik, “Timpanik Membran Görüntü Özellikleri Kullanılarak Sınıflandırılması,” Fırat Üniversitesi Mühendislik Bilim. Derg., vol. 33, no. 2, pp. 441–453, 2021.
  • [19] S. M. Lin, P. Du, W. Huber, and W. A. Kibbe, “Model-based variance-stabilizing transformation for Illumina microarray data,” Nucleic Acids Res., vol. 36, no. 2, pp. e11–e11, 2008.
  • [20] T. M. Ghazal, S. Hussain, M. F. Khan, M. A. Khan, R. A. T. Said, and M. Ahmad, “Detection of benign and malignant tumors in skin empowered with transfer learning,” Comput. Intell. Neurosci., vol. 2022, no. 1, p. 4826892, 2022.
  • [21] A. Bassel, A. B. Abdulkareem, Z. A. A. Alyasseri, N. S. Sani, and H. J. Mohammed, “Automatic malignant and benign skin cancer classification using a hybrid deep learning approach,” Diagnostics, vol. 12, no. 10, p. 2472, 2022.
  • [22] G. H. Dagnaw, M. El Mouhtadi, and M. Mustapha, “Skin cancer classification using vision transformers and explainable artificial intelligence,” J. Med. Artif. Intell., vol. 7, 2024.

Details

Primary Language: English
Subjects: Artificial Intelligence (Other)
Section: Research Article
Authors

Alper Talha Karadeniz 0000-0003-4165-3932

Early View Date: September 20, 2024
Publication Date: September 26, 2024
Submission Date: June 27, 2024
Acceptance Date: July 29, 2024
Published Issue: Year 2024, Volume: 13, Issue: 3

How to Cite

IEEE: A. T. Karadeniz, “Automatic Classification of Melanoma Skin Cancer Images with Vision Transform Model and Transfer Learning”, Bitlis Eren Üniversitesi Fen Bilimleri Dergisi, vol. 13, no. 3, pp. 844–850, 2024, doi: 10.17798/bitlisfen.1505636.



Bitlis Eren University
Journal of Science Editorial Office

Bitlis Eren University Graduate Education Institute
Beş Minare Mah. Ahmet Eren Bulvarı, Merkez Kampüs, 13000 BİTLİS
E-mail: fbe@beu.edu.tr