Research Article

ARTIFICIAL INTELLIGENCE-BASED SKIN CANCER DIAGNOSIS: A USER INTERFACE DESIGN

Year 2025, Volume: 13 Issue: 4, 1091 - 1105, 30.12.2025
https://doi.org/10.21923/jesd.1735294

Abstract

In this paper, an artificial intelligence-based skin cancer diagnosis system was developed. Seven different types of skin lesions were classified using the HAM10000 dataset. Class imbalance in the dataset was addressed through data augmentation, class weighting, and the focal loss function. Transfer learning models, including EfficientNetB0, ResNet50, and MobileNetV2, were trained and evaluated using accuracy, F1-score, ROC curves, and confusion matrices. Among these, ResNet50 achieved the best performance. In addition, the Grad-CAM method was used to visualize the model's decision-making process, and a Gradio-based web interface was developed. The aim of the study is to provide user-friendly, fast, and explainable AI-based diagnostic support.
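The class-imbalance handling described above (inverse-frequency class weights combined with the focal loss of Lin et al., 2017) can be sketched in a few lines. This is a minimal NumPy illustration of the standard formulas, not the authors' actual training code; the array shapes and the gamma/alpha values are assumptions.

```python
import numpy as np

def class_weights(labels, n_classes):
    """Inverse-frequency class weights: w_c = N / (C * n_c)."""
    counts = np.bincount(labels, minlength=n_classes)
    return len(labels) / (n_classes * np.maximum(counts, 1))

def focal_loss(probs, labels, gamma=2.0, alpha=0.25):
    """Multi-class focal loss, FL = -alpha * (1 - p_t)^gamma * log(p_t).
    probs: (N, C) softmax outputs; labels: (N,) integer class ids."""
    eps = 1e-9
    p_t = probs[np.arange(len(labels)), labels]  # probability assigned to the true class
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t + eps)))
```

Relative to plain cross-entropy, the (1 - p_t)^gamma factor down-weights well-classified examples, so rare lesion classes contribute proportionally more to the training signal.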
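The Grad-CAM visualization mentioned in the abstract reduces to a weighted sum of convolutional feature maps (Selvaraju et al., 2017). A minimal sketch of that computation, assuming the layer activations and gradients have already been extracted from the trained network:

```python
import numpy as np

def grad_cam_heatmap(activations, gradients):
    """Grad-CAM: weight each feature map by the mean of its gradients,
    sum over maps, apply ReLU, and normalize to [0, 1].
    activations, gradients: (H, W, K) arrays for one conv layer."""
    weights = gradients.mean(axis=(0, 1))  # alpha_k: global-average-pooled gradients
    cam = np.maximum((activations * weights).sum(axis=-1), 0.0)  # ReLU(sum_k alpha_k A^k)
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the heatmap is resized to the input resolution and overlaid on the dermatoscopic image to show which region drove the prediction.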

Thanks

We thank the researchers who contributed to scientific research by providing the open-access HAM10000 dataset.

References

  • Chen, J., Lu, Y., Yu, Q., Luo, X., Adeli, E., Wang, Y., Lu, L., Yuille, A. L., & Zhou, Y. (2021). TransUNet: Transformers make strong encoders for medical image segmentation. arXiv preprint arXiv:2102.04306.
  • Chiu, T. M., Li, Y.-C., Chi, I.-C., & Tseng, M.-H. (2025). AI-driven enhancement of skin cancer diagnosis: A two-stage multi-modal learning algorithm with interpretable visual explanations. Computers in Biology and Medicine, 146, 105650. https://doi.org/10.1016/j.compbiomed.2022.105650
  • Chiu, T. M., Li, Y.-C., Chi, I.-C., & Tseng, M.-H. (2025). AI-driven enhancement of skin cancer diagnosis: Multi-modal learning, interpretable vision transformers, and mobile applications. Computers in Biology and Medicine, 146, 105650. https://doi.org/10.1016/j.compbiomed.2022.105650
  • Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., ... & Houlsby, N. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
  • Gamage, D., et al. (2025). Enhanced early skin cancer detection through fusion of CNN and ViT models. Scientific Reports, 15(1), 18570. https://doi.org/10.1038/s41598-025-18570-1
  • Google Colab. (2025). Google Colaboratory: A cloud platform for Python notebooks. Retrieved from https://colab.research.google.com
  • Gradio Team. (2025). Gradio: Create machine learning interfaces. Retrieved from https://gradio.app
  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 770–778). https://doi.org/10.1109/CVPR.2016.90
  • Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., ... & Adam, H. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861. https://arxiv.org/abs/1704.04861
  • Hugging Face Team. (2023). Deploying machine learning models with Hugging Face Spaces and Streamlit for interactive applications. Hugging Face Documentation. https://huggingface.co/docs/spaces
  • Lilhore, U. K., et al. (2025). SkinEHDLF: A hybrid deep learning approach for accurate skin cancer classification in complex systems. Scientific Reports, 15(1), 98205. https://doi.org/10.1038/s41598-025-98205-7
  • Lin, T. Y., Goyal, P., Girshick, R., He, K., & Dollár, P. (2017). Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) (pp. 2980–2988). https://doi.org/10.1109/ICCV.2017.324
  • Lungu-Stan, V.-C., Cercel, D.-C., & Pop, F. (2023). SkinDistilViT: Lightweight vision transformer for skin lesion classification. Lecture Notes in Computer Science, 13466, 311–320. https://doi.org/10.1007/978-3-031-44207-0_23
  • Powers, D. M. W. (2011). Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. Journal of Machine Learning Technologies, 2(1), 37–63.
  • Qin, Z., Liu, Z., Zhu, P., & Lu, H. (2022). Medical image data augmentation via generative adversarial networks: A review. Journal of Imaging, 8(9), 238. https://doi.org/10.3390/jimaging8090238
  • Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV) (pp. 618–626). https://doi.org/10.1109/ICCV.2017.74
  • Shafiq, M., et al. (2024). A novel skin lesion prediction and classification technique: ViT-GradCAM. Journal of Dermatology & Dermatologic Surgery, 28(3), 157–163. https://doi.org/10.1016/j.jdds.2024.03.004
  • Sudharshan, P. J., Mahadevan, S., & Kalra, S. (2020). Dice-Focal loss for skin lesion segmentation using deep learning. Biomedical Signal Processing and Control, 62, 102133. https://doi.org/10.1016/j.bspc.2020.102133
  • Tan, M., & Le, Q. V. (2019). EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning (ICML) (pp. 6105–6114). https://proceedings.mlr.press/v97/tan19a.html
  • Tschandl, P., Rosendahl, C., & Kittler, H. (2018). The HAM10000 dataset: A large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific Data, 5, Article 180161. https://doi.org/10.1038/sdata.2018.161
  • Zhang, X., et al. (2025). DermViT: Diagnosis-Guided Vision Transformer for Robust and Efficient Skin Lesion Classification. Journal of Imaging Science and Technology, 12(4), 421. https://doi.org/10.3390/jimaging12040421
  • Zhang, Y., Liu, Y., Wang, L., & Li, X. (2023). GS-TransUNet: Global–local skip connection transformer U-Net for skin lesion segmentation. Computer Methods and Programs in Biomedicine, 230, 107428. https://doi.org/10.1016/j.cmpb.2023.107428

There are 22 citations in total.

Details

Primary Language Turkish
Subjects Software Testing, Verification and Validation
Journal Section Research Article
Authors

Mehmet Ali Kirencik 0009-0002-0615-0715

İsmail Kağan Kavukluca 0009-0001-2058-2322

Erdoğan Aydın 0000-0002-5198-0980

Submission Date July 4, 2025
Acceptance Date October 17, 2025
Publication Date December 30, 2025
Published in Issue Year 2025 Volume: 13 Issue: 4

Cite

APA Kirencik, M. A., Kavukluca, İ. K., & Aydın, E. (2025). YAPAY ZEKA İLE CİLT KANSERİ TEŞHİSİ: BİR KULLANICI ARAYÜZ TASARIMI. Mühendislik Bilimleri Ve Tasarım Dergisi, 13(4), 1091-1105. https://doi.org/10.21923/jesd.1735294