Research Article


Analysis of the Behavior of the Input Data Set Attributes Affecting the Outputs in MLP-Based Artificial Intelligence Models According to the Model Architecture

Year 2025, Volume: 7 Issue: 1, 29 - 37, 30.06.2025
https://doi.org/10.59940/jismar.1577691

Abstract

With the widespread use of artificial intelligence, explainability, interpretability, and transparency have become important issues, especially in domains such as health, defence, security, and law. In this study, the same data sets are used to train feed-forward, back-propagation multilayer perceptron (MLP) artificial neural network models with different architectures, and the relationship between the model architecture and the effects of the data set attributes on the MLP model output is investigated. The contributions of the model input attributes to the model prediction were measured with the SHAP method. As the MLP architecture changes, the ranking of the contributions of the input data set attribute values to the model output also changes. It was observed that this change in the attribute influence ranking mostly applies to attributes whose contribution levels are relatively close to each other, while the ranking of attributes whose influence ratio differs somewhat from the other attributes does not change much with the MLP architecture. According to these results, it can be concluded that the model architecture is also effective to a certain extent in Explainable Artificial Intelligence, and that there is no relationship between the model's accuracy value and the importance rates of the attributes.
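
As a rough illustration of the workflow the abstract describes, the Python sketch below trains scikit-learn MLP classifiers with a few different hidden-layer architectures on the Breast Cancer Wisconsin (Diagnostic) dataset cited in the references, estimates per-attribute contributions with SHAP's model-agnostic KernelExplainer, and compares the resulting importance rankings. The chosen architectures, background sample size, and number of explained instances are illustrative assumptions, not the authors' experimental configuration.

# Minimal sketch (not the authors' exact pipeline): compare SHAP-based attribute
# rankings across MLP architectures. Architectures and sample sizes are assumptions.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Breast Cancer Wisconsin (Diagnostic) data, as referenced in the paper's sources.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)

# Hypothetical hidden-layer configurations to compare.
architectures = [(16,), (32, 16), (64, 32, 16)]

rankings = {}
for hidden in architectures:
    model = MLPClassifier(hidden_layer_sizes=hidden, max_iter=2000, random_state=42)
    model.fit(X_train_s, y_train)

    # Model-agnostic SHAP estimate: KernelExplainer over a small background sample
    # keeps the computation tractable for a sketch like this.
    background = shap.sample(X_train_s, 50)
    explainer = shap.KernelExplainer(lambda a: model.predict_proba(a)[:, 1], background)
    shap_values = explainer.shap_values(X_test_s[:50], nsamples=200)

    # Rank attributes by mean absolute SHAP value (global importance).
    importance = np.abs(shap_values).mean(axis=0)
    order = np.argsort(importance)[::-1]
    rankings[hidden] = [X.columns[i] for i in order]
    print(hidden, "accuracy:", round(model.score(X_test_s, y_test), 3),
          "top-3 attributes:", rankings[hidden][:3])

Comparing the ordered lists in rankings across architectures is one simple way to check, as the abstract reports, whether only the closely ranked attributes swap positions while clearly dominant attributes keep their rank.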

References

  • R. A. S. Deliloğlu and A. Ç. Pehlivanlı, “Hybrid Explainable Artificial Intelligence Design and LIME Application”, European Journal of Science and Technology, no. 27, pp. 228-236, November 2021.
  • P. Gohel, P. Singh, and M. Mohanty, “Explainable AI: current status and future directions”, IEEE Access, 2021, arXiv:2107.07045, DOI: https://doi.org/10.48550/arXiv.2107.07045.
  • W. Saeed and C. Omlin, “Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities”, Knowledge-Based Systems, vol. 263, 110273, ISSN 0950-7051, 2023, https://doi.org/10.1016/j.knosys.2023.110273.
  • A. Chander, R. Srinivasan, S. Chelian, J. Wang, and K. Uchino, “Working with beliefs: AI transparency in the enterprise”, in IUI Workshops, vol. 1, March 2018.
  • A. Barredo Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. Garcia, S. Gil-Lopez, D. Molina, R. Benjamins, R. Chatila, F. Herrera, “Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI”, Information Fusion, vol. 58, pp. 82-115, ISSN 1566-2535, 2020, https://doi.org/10.1016/j.inffus.2019.12.012.
  • E. Tjoa and C. Guan, “A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI”, IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 11, pp. 4793-4813, 2021.
  • F. K. Došilović, M. Brčić and N. Hlupić, “Explainable artificial intelligence: A survey”, 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 2018, pp. 0210-0215, doi: 10.23919/MIPRO.2018.8400040.
  • Y. Izza, A. Ignatiev, J. Marques-Silva, “On Explaining Decision Trees”, arXiv:2010.11034 [cs.LG], https://doi.org/10.48550/arXiv.2010.11034.
  • M. A. Hearst, S. T. Dumais, E. Osuna, J. Platt and B. Scholkopf, "Support vector machines," in IEEE Intelligent Systems and their Applications, vol. 13, no. 4, pp. 18-28, July-Aug. 1998, doi: 10.1109/5254.708428.
  • N. Cristianini, E. Ricci, “Support Vector Machines”, In: Kao, MY. (eds) Encyclopedia of Algorithms. Springer, Boston, MA, 2008, https://doi.org/10.1007/978-0-387-30162-4_415.
  • X. Zou, Y. Hu, Z. Tian and K. Shen, "Logistic Regression Model Optimization and Case Analysis," 2019 IEEE 7th International Conference on Computer Science and Network Technology (ICCSNT), Dalian, China, 2019, pp. 135-139, doi: 10.1109/ICCSNT47585.2019.8962457.
  • F. J. Yang, "An Implementation of Naive Bayes Classifier," 2018 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 2018, pp. 301-306, doi: 10.1109/CSCI46756.2018.00065.
  • S. Norozpour and M. Safaei, “An Overview on Game Theory and Its Application”, IOP Conference Series: Materials Science and Engineering, vol. 993, 012114, ICMECE 2020, doi: 10.1088/1757-899X/993/1/012114.
  • S. M. Lundberg and S.-I. Lee, “A Unified Approach to Interpreting Model Predictions”, 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 2017.
  • SHAP Documentation, https://shap.readthedocs.io/en/latest/index.html (Access Date: June 4, 2024).
  • UCI, “Machine Learning Repository”, https://archive.ics.uci.edu/ (Access Date: June 5, 2023).
  • M. Polatgil, “Investigation of The Effects of Data Scaling and Imputation of Missing Data Approaches on The Success of Machine Learning Methods”, Duzce University Journal of Science and Technology, vol. 11, pp. 78-88, 2023, DOI: 10.29130/dubited.948564.
  • T. Emmanuel, T. Maupong, D. Mpoeleng, T. Semong, B. Mphago, O. Tabona, “A survey on missing data in machine learning”, J Big Data 8, 140, (2021), https://doi.org/10.1186/s40537-021-00516-9.
  • Q. Song and M. J. Shepperd, “Missing Data Imputation Techniques”, International Journal of Business Intelligence and Data Mining, vol. 2, pp. 261-291, 2007, DOI: 10.1504/IJBIDM.2007.015485.
  • C. Molnar, “Interpretable Machine Learning: A Guide for Making Black Box Models Explainable”, Independently published, February 28, 2022, pp. 168-188, ISBN-13: 979-8411463330.
  • M. T. Ribeiro, S. Singh and C. Guestrin, “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier”, KDD 2016, San Francisco, CA, USA, 2016, ISBN 978-1-4503-4232-2/16/08, DOI: http://dx.doi.org/10.1145/2939672.2939778.
  • W. H. A. Wan Zunaidi, R. R. Saedudin, Z. Ali Shah, S. Kasim, C. Sen Seah, and M. Abdurohman, “Performances Analysis of Heart Disease Dataset using Different Data Mining Classifications”, Int. J. Adv. Sci. Eng. Inf. Technol., vol. 8, no. 6, pp. 2677-2682, Dec. 2018.
  • H. W. Kuhn, “Classics in Game Theory”, Princeton University Press, 1997, ISBN-13: 978-0691011929.
  • P. Nagaraj, V. Muneeswaran, A. Dharanidharan, K. Balananthanan, M. Arunkumar, C. Rajkumar, “A Prediction and Recommendation System for Diabetes Mellitus using XAI-based Lime Explainer”, 2022 International Conference on Sustainable Computing and Data Communication Systems (ICSCDS), 2022, DOI: 10.1109/ICSCDS53736.2022.9760847.
  • UCI Machine Learning Repository, “Breast Cancer Wisconsin (Diagnostic)”, https://archive.ics.uci.edu/dataset/17/breast+cancer+wisconsin+diagnostic (Access Date: April 3, 2025).
  • H. Doğan, A. B. Tatar, A. K. Tanyıldızı, B. Taşar, “Breast Cancer Diagnosis with Machine Learning Techniques”, Bitlis Eren University Journal of Science, ISSN: 2147-3129, e-ISSN: 2147-3188, vol. 11, no. 2, pp. 594-603, 2022, DOI: 10.17798/bitlisfen.1065685.
  • V. Asha, B. Saju, S. Mathew, A. M. V, Y. Swapna and S. P. Sreeja, “Breast Cancer classification using Neural networks,” 2023 International Conference on Intelligent and Innovative Technologies in Computing, Electrical and Electronics (IITCEE), Bengaluru, India, 2023, pp. 900-905, doi: 10.1109/IITCEE57236.2023.10091020.
  • A. Çifci, M. İlkuçar, İ. Kırbaş, “Explainable Artificial Intelligence for Biomedical Applications/What Makes Survival of Heart Failure Patients? Prediction by the Iterative Learning Approach and Detailed Factor Analysis with the SHAP Algorithm”. Publisher: River Publishers, 2023. Chapter: 6, ISBN: 9788770228497 (Hardback), 9788770228848 (Ebook).
  • A. A. Ahmad, H. Polat, “Prediction of Heart Disease Based on Machine Learning Using Jellyfish Optimization Algorithm”, Diagnostics (Basel), vol. 13, no. 14, 2392, 2023, doi: 10.3390/diagnostics13142392.
  • Y. Lin, “Prediction and Analysis of Heart Disease Using Machine Learning,” 2021 IEEE International Conference on Robotics, Automation and Artificial Intelligence (RAAI), Hong Kong, Hong Kong, 2021, pp. 53-58, doi: 10.1109/RAAI52226.2021.9507928.
There are 30 citations in total.

Details

Primary Language English
Subjects Management Information Systems
Journal Section Vol 7 - Issue 1 - 30 June 2025
Authors

Muhammer İlkuçar (ORCID: 0000-0001-9908-7633)

Publication Date June 30, 2025
Submission Date November 1, 2024
Acceptance Date June 23, 2025
Published in Issue Year 2025 Volume: 7 Issue: 1

Cite

APA İlkuçar, M. (2025). Analysis of the Behavior of the Input Data Set Attributes Affecting the Outputs in MLP-Based Artificial Intelligence Models According to the Model Architecture. Journal of Information Systems and Management Research, 7(1), 29-37. https://doi.org/10.59940/jismar.1577691