Research Article

An Application on the Use of Explainable Artificial Intelligence in Healthcare

Year 2025, Volume 40, Issue 3, 707-723, 26.09.2025
https://doi.org/10.21605/cukurovaumfd.1729986

Abstract

With the advent of big data, the potential to derive valuable insights has significantly increased. The widespread use of machine learning, especially deep learning, has made it possible to extract meaningful patterns from large datasets. However, end-users often encounter issues related to the transparency, interpretability, and reliability of these models. To address such concerns, Explainable Artificial Intelligence (XAI) has been developed to systematically clarify how models work and how their decisions can be understood and trusted.
This study explores the use and benefits of XAI in the healthcare domain. As an example, it presents an application that predicts the survival status (alive or deceased) of breast cancer patients based on various clinical parameters. Initially, classification was performed using machine learning algorithms. To interpret the results of the most successful model, the SHAP (SHapley Additive exPlanations) technique, a prominent XAI method, was applied. Findings indicate that XAI enhances understanding of model decisions and supports trustworthy AI applications in healthcare.
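The workflow described above (train several classifiers, then attribute the best model's predictions to individual clinical features) rests on Shapley values, the game-theoretic quantity that SHAP estimates. As an illustration only, the sketch below computes exact Shapley values by brute force for a single instance; the feature names, coefficients, patient values, and baseline are hypothetical, and a simple linear risk score stands in for the trained classifier from the study.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for one instance: average each feature's
    marginal contribution over all coalitions of the other features.
    Features absent from a coalition are filled with baseline values."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight for a coalition of this size
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (predict(with_i) - predict(without_i))
    return phi

# Hypothetical linear "risk score" standing in for the trained classifier;
# features and coefficients are illustrative, not taken from the study.
def risk(v):
    age, tumor_size, positive_nodes = v
    return 0.02 * age + 0.05 * tumor_size + 0.30 * positive_nodes

patient = [60.0, 25.0, 3.0]   # age, tumor size (mm), positive lymph nodes
baseline = [50.0, 15.0, 0.0]  # reference values used for "absent" features

phi = shapley_values(risk, patient, baseline)
# For a linear model this reduces to w_i * (x_i - baseline_i): [0.2, 0.5, 0.9]
assert abs(sum(phi) - (risk(patient) - risk(baseline))) < 1e-9  # efficiency property
```

This enumeration is exponential in the number of features and only workable for a handful of them; in practice the SHAP library approximates the same quantities efficiently for real models such as the tree ensembles compared in the study.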

References

  • 1. Kaplan, M., Çakar, F. & Bingöl, H. (2024). Sağlık alanında yapay zekanın kullanımı: Derleme. Muş Alparslan Üniversitesi Sağlık Bilimleri Dergisi, 4(3), 75-85.
  • 2. Adadi, A. & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160.
  • 3. Terzi, R. (2021). Sağlık sektöründe açıklanabilir yapay zekâ. Yapay zekâ ve büyük veri çalışmaları, Siber Güvenlik ve Mahremiyet, 157-175.
  • 4. Kalasampath, K., Spoorthi, K.N., Sajeev, S., Kuppa, S.S., Ajay, K. & Maruthamuthu, A. (2025). A literature review on applications of explainable artificial intelligence (XAI). IEEE Access, 13, 41111-41140.
  • 5. Yildirim, H. (2024). Hastalıkların teşhis ve tedavisi için tıbbi görüntülemelerden yapay zeka kullanımı ile daha hassas ve daha hızlı sonuç elde etme konusundaki gelişmeler: FDA (Amerikan Gıda ve İlaç Dairesi) tarafından onaylanmış teknolojilerle geliştirilebilecek yeni uygulamalar. Academic Social Resources Journal, 6(24), 558-568.
  • 6. Alkhanbouli, R., Abdulla Almadhaani, H.M., Alhosani, F. & Simsekler, M.C.E. (2025). The role of explainable artificial intelligence in disease prediction: A systematic literature review and future research directions. BMC Medical Informatics and Decision Making, 25(1), 110.
  • 7. Albahri, A.S., Duhaim, A.M., Fadhel, M.A., Alnoor, A., Bager, N.S., Alzubaidi, L., Albahri, O.S., Alamoodi, A.H., Bai, J., Salhi, A., Santamaria, J., Ouyang, C., Gupta, A., Gu, Y. & Deveci, M. (2023). A systematic review of trustworthy and explainable artificial intelligence in healthcare: Assessment of quality, bias risk, and data fusion. Information Fusion, 96, 156-191.
  • 8. Akin, K.D., Gurkan, C., Budak, A. & Karataş, H. (2022). Açıklanabilir yapay zeka destekli evrişimsel sinir ağları kullanılarak maymun çiçeği deri lezyonunun sınıflandırılması. European Journal of Science and Technology, 40, 106-110.
  • 9. Nafisah, S.I. & Muhammad, G. (2024). Tuberculosis detection in chest radiograph using convolutional neural network architecture and explainable artificial intelligence. Neural Computing and Applications, 36(1), 111-131.
  • 10. Orman, A., Köse, U. & Yiğit, T. (2021). Brain tumor detection via explainable convolutional neural networks: Açıklanabilir evrişimsel sinir ağları ile beyin tümörü tespiti. El-Cezeri Journal of Science and Engineering, 8(3), 1323-1337.
  • 11. Viswan, V., Shaffi, N., Mahmud, M., Subramanian, K. & Hajamohideen, F. (2024). Explainable artificial intelligence in Alzheimer’s disease classification: A systematic review. Cognitive Computation, 16(1), 1-44.
  • 12. Wani, N.A., Kumar, R. & Bedi, J. (2024). DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence. Computer Methods and Programs in Biomedicine, 243, 107879.
  • 13. Zhang, Y., Weng, Y. & Lund, J. (2022). Applications of explainable artificial intelligence in diagnosis and surgery. Diagnostics, 12(2), 237.
  • 14. Yang, C.C. (2022). Explainable artificial intelligence for predictive modeling in healthcare. Journal of Healthcare Informatics Research, 6(2), 228-239.
  • 15. Tjoa, E. & Guan, C. (2021). A survey on explainable artificial intelligence (XAI): Toward medical XAI. IEEE Transactions on Neural Networks and Learning Systems, 32(11), 4793-4813.
  • 16. Sun, Q., Akman, A. & Schuller, B.W. (2025). Explainable artificial intelligence for medical applications: A review. ACM Transactions on Computing for Healthcare, 6(2), 1-31.
  • 17. van Lent, M., Fisher, W. & Mancuso, M. (2004). An explainable artificial intelligence system for small-unit tactical behavior. In Proceedings of the 16th Conference on Innovative Applications of Artificial Intelligence (IAAI’04) (900-907). AAAI Press.
  • 18. Barredo Arrieta, A., Diaz-Rodriguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R. & Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
  • 19. Belle, V. & Papantonis, I. (2021). Principles and practice of explainable machine learning. Frontiers in Big Data, 4, 688969.
  • 20. Kalasampath, K., Spoorthi, K.N., Sajeev, S., Kuppa, S.S., Ajay, K. & Maruthamuthu, A. (2025). A literature review on applications of explainable artificial intelligence (XAI). IEEE Access, 13, 41111-41140.
  • 21. Rawal, A., McCoy, J., Rawat, D.B., Sadler, B.M. & Amant, R.S. (2022). Recent advances in trustworthy explainable artificial intelligence: Status, challenges, and perspectives. IEEE Transactions on Artificial Intelligence, 3(6), 852-866.
  • 22. Nguyen, H.T.T. & Cao, H.Q. (n.d.). Evaluation of explainable artificial intelligence: SHAP, LIME, and CAM. [Manuscript in preparation].
  • 23. Holzinger, A., Saranti, A., Molnar, C., Biecek, P. & Samek, W. (2022). Explainable AI methods-A brief overview. In A. Holzinger et al. (Eds.), xxAI - Beyond Explainable AI (13-38). Springer.
  • 24. Salih, A.M., Raisi, Z., Galazzo, I.B., Radeva, P., Petersen, S.E., Lekadir, K. & Menegaz, G. (2025). A perspective on explainable artificial intelligence methods: SHAP and LIME. Advanced Intelligent Systems, 7(1), 2400304.
  • 25. Shapley, L. (2020). A value for n-person games. In H. W. Kuhn (Ed.), Classics in game theory (69-79). Princeton University Press.
  • 26. Meng, Y., Yang, N., Qian, Z. & Zhang, G. (2021). What makes an online review more helpful: An interpretation framework using XGBoost and SHAP values. Journal of Theoretical and Applied Electronic Commerce Research, 16(3), 3.
  • 27. Teng, J. (2019). SEER breast cancer data. IEEE DataPort.
  • 28. Potdar, K., Pardawala, T. & Pai, C. (2017). A comparative study of categorical variable encoding techniques for neural network classifiers. International Journal of Computer Applications, 175(4), 7-9.
  • 29. Ahsan, M.M., Mahmud, M.A.P., Saha, P.K., Gupta, K.D. & Siddique, Z. (2021). Effect of data scaling methods on machine learning algorithms and model performance. Technologies, 9(3), 3.
  • 30. Hua, Y., Stead, T.S., George, A. & Ganti, L. (2025). Clinical risk prediction with logistic regression: Best practices, validation techniques, and applications in medical research. Academic Medicine & Surgery, 1(1), 1-13.
  • 31. Patel, H.H. & Prajapati, P. (2018). Study and analysis of decision tree based classification algorithms. International Journal of Computer Sciences and Engineering, 6(10), 74-78.
  • 32. Masetic, Z. & Subasi, A. (2016). Congestive heart failure detection using random forest classifier. Computer Methods and Programs in Biomedicine, 130, 54-64.
  • 33. Bentéjac, C., Csörgő, A. & Martínez-Muñoz, G. (2021). A comparative analysis of gradient boosting algorithms. Artificial Intelligence Review, 54(3), 1937-1967.
  • 34. Wasule, V. & Sonar, P. (2017). Classification of brain MRI using SVM and KNN classifier. In 2017 Third International Conference on Sensing, Signal Processing and Security (ICSSS) (218-223). IEEE.
  • 35. Blagus, R. & Lusa, L. (2013). SMOTE for high-dimensional class-imbalanced data. BMC Bioinformatics, 14(1), 106.
  • 36. Anand, A., Pugalenthi, G., Fogel, G.B. & Suganthan, P.N. (2010). An approach for classification of highly imbalanced data using weighting and undersampling. Amino Acids, 39(5), 1385-1391.
  • 37. Rakha, E., Toss, M. & Quinn, C. (2022). Specific cell differentiation in breast cancer: A basis for histological classification. Journal of Clinical Pathology, 75(2), 76-84.
  • 38. Aizer, A.A., Chen, M.-H., Mccarthy, E., Mendu, M.L., Koo, S., Wilhite, T.J., Graham, P.L., Choueiri, T.K., Hoffman, K.E., Martin, N.E., Hu, J.C. & Nguyen, P.L. (2013). Marital status and survival in patients with cancer. Journal of Clinical Oncology, 31(31), 3869-3876.
  • 39. Ghavidel, A. & Pazos, P. (2025). Machine learning (ML) techniques to predict breast cancer in imbalanced datasets: A systematic review. Journal of Cancer Survivorship, 19(1), 270-294.
  • 40. Moncada-Torres, A., van Maaren, M.C., Hendriks, M.P., Siesling, S. & Geleijnse, G. (2021). Explainable machine learning can outperform Cox regression predictions and provide insights in breast cancer survival. Scientific Reports, 11(1), 1-13.
  • 41. Park, S.W., Park, Y.L., Lee, E.G., Chae, H., Park, P., Choi, D.W., Choi, Y.H., Hwang, J., Ahn, S., Kim, K., Kim, W.J., Kong, S.-Y., Jung, S.-Y. & Kim, H.J. (2024). Mortality prediction modeling for patients with breast cancer based on explainable machine learning. Cancers, 16(22), 3799.

Sağlık Alanında Açıklanabilir Yapay Zekâ Kullanımı Üzerine Bir Uygulama



Details

Primary Language: Turkish
Subjects: Artificial Intelligence (Other)
Section: Articles
Authors

Gülhan Toğa (ORCID: 0000-0001-8835-1769)

Publication Date: 26 September 2025
Submission Date: 30 June 2025
Acceptance Date: 25 September 2025
Published Issue: Year 2025, Volume 40, Issue 3

Cite

APA Toğa, G. (2025). Sağlık Alanında Açıklanabilir Yapay Zekâ Kullanımı Üzerine Bir Uygulama. Çukurova Üniversitesi Mühendislik Fakültesi Dergisi, 40(3), 707-723. https://doi.org/10.21605/cukurovaumfd.1729986