Research Article

A comparative analysis on the reliability of interpretable machine learning

Year 2024, Volume: 30 Issue: 4, 494 - 508, 30.08.2024

Abstract

There is often a trade-off between accuracy and interpretability in Machine Learning (ML) models: as a model becomes more complex, its accuracy generally increases while its interpretability decreases. Interpretable Machine Learning (IML) methods have emerged to make complex ML models interpretable while maintaining their accuracy. Because these methods explain an already trained model, feature importance can be determined without sacrificing accuracy. In this study, we compare model-agnostic IML methods, including SHAP and ELI5, with intrinsic IML methods and Feature Selection (FS) methods in terms of the similarity of the attributes they select. We also compare the agnostic IML methods (SHAP, LIME, and ELI5) with each other in terms of the similarity of local attribute selection. Experimental studies were conducted on both general and private datasets to predict company default. The results confirm the reliability of agnostic IML methods, demonstrating up to 86% similarity in attribute selection compared to intrinsic IML methods and FS methods. In addition, certain agnostic IML methods can interpret models for local instances. The findings indicate that agnostic IML methods can be applied to complex ML models to offer both global and local interpretability while maintaining high accuracy.
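The attribute-selection similarity reported in the abstract can be measured with the Jaccard index (cited in the reference list as [60]): the size of the intersection of two selected feature sets divided by the size of their union. A minimal sketch of such a comparison follows; the feature names and top-5 rankings below are hypothetical illustrations, not values taken from the paper.

```python
def jaccard_similarity(set_a, set_b):
    """Jaccard index: |A ∩ B| / |A ∪ B|, in [0, 1]."""
    a, b = set(set_a), set(set_b)
    if not a and not b:
        return 1.0  # two empty selections are trivially identical
    return len(a & b) / len(a | b)

# Hypothetical top-5 attributes ranked by an agnostic IML method (e.g. SHAP)
# and by a classical feature-selection method on the same default dataset.
shap_top5 = {"debt_ratio", "roa", "current_ratio", "cash_flow", "equity"}
fs_top5 = {"debt_ratio", "roa", "current_ratio", "leverage", "equity"}

print(jaccard_similarity(shap_top5, fs_top5))  # 4 shared / 6 in union ≈ 0.667
```

A similarity of 0.86, as the paper reports for its best case, would correspond to the two methods agreeing on the large majority of selected attributes.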

References

  • [1] Morocho-Cayamcela ME, Lee H, Lim W. “Machine learning for 5g/b5g mobile and wireless communications: Potential, limitations, and future directions”. IEEE Access, 7, 137184-137206, 2019.
  • [2] Baryannis G, Dani S, Antoniou G. “Predicting supply chain risks using machine learning: The trade-off between performance and interpretability”. Future Generation Computer Systems, 101, 993-1004, 2019.
  • [3] Mori T, Uchihira N. “Balancing the trade-off between accuracy and interpretability in software defect prediction”. Empirical Software Engineering, 24(2), 779-825, 2019.
  • [4] Doshi-Velez F, Kim B. “Towards a rigorous science of interpretable machine learning”. arXiv, 2017. https://arxiv.org/pdf/1702.08608.pdf.
  • [5] Lundberg S, Lee SI. “A unified approach to interpreting model predictions”. arXiv, 2017. https://arxiv.org/pdf/1705.07874.pdf.
  • [6] Fan A, Jernite Y, Perez E, Grangier D, Weston J, Auli M. “ELI5: Long form question answering”. arXiv, 2019. https://arxiv.org/pdf/1907.09190.pdf.
  • [7] Ribeiro MT, Singh S, Guestrin C. ““Why Should I Trust You?”: Explaining the predictions of any classifier”. in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, California, USA, 13-17 August 2016.
  • [8] Zhao L, Dong X. “An industrial internet of things feature selection method based on potential entropy evaluation criteria”. IEEE Access, 6, 4608-4617, 2018.
  • [9] Jiang T, Gradus JL, Rosellini AJ. “Supervised machine learning: a brief primer”. Behavior Therapy, 51(5), 675-687, 2020.
  • [10] Jović A, Brkić K, Bogunović N. “A review of feature selection methods with applications”. in 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 25-29 May 2015.
  • [11] Alelyani S, Tang J, Liu H. “Feature Selection for Clustering: A Review”. Editors: Aggarwal C, Reddy R. Data Clustering: Algorithms and Applications, 2013.
  • [12] Dy JG, Brodley CE. “Feature selection for unsupervised learning”. Journal of Machine Learning Research, 5, 845-889, 2004.
  • [13] Ang JC, Mirzal A, Haron H, Hamed HNA. “Supervised, unsupervised, and semi-supervised feature selection: a review on gene selection”. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 13(5), 971-989, 2015.
  • [14] Yıldırım S, Yıldız T. “A comparative analysis of text classification for Turkish language”. Pamukkale University Journal of Engineering Sciences, 24(5), 879-886, 2018.
  • [15] Behura A. “The Cluster Analysis and Feature Selection: Perspective of Machine Learning and Image Processing”. Editors: Satpathy R, Choudhury T, Satpathy S, Mohanty S N, Zhang X. Data Analytics in Bioinformatics: A Machine Learning Perspective, 249-280, John Wiley & Sons, 2021.
  • [16] Li X, Yi P, Wei W, Jiang Y, Tian L. “LNNLS-KH: A Feature Selection Method for Network Intrusion Detection”. Security and Communication Networks, 2021, 1-22, 2021.
  • [17] Selvalakshmi B, Subramaniam M. “Intelligent ontology based semantic information retrieval using feature selection and classification”. Cluster Computing, 22(5), 12871-12881, 2019.
  • [18] Du X, Li W, Ruan S, Li L. “CUS-heterogeneous ensemble- based financial distress prediction for imbalanced dataset with ensemble feature selection”. Applied Soft Computing, 97, 1-13, 2020.
  • [19] Yousef M, Kumar A, Bakir-Gungor B. “Application of biological domain knowledge based feature selection on gene expression data”. Entropy, 23(1), 1-15, 2021.
  • [20] Akalın F, Yumusak N. “Classification of acute leukaemias with a hybrid use of feature selection algorithms and deep learning-based architectures”. Pamukkale University Journal of Engineering Sciences, 29(3), 256-263, 2023.
  • [21] Mohammadi S, Mirvaziri H, Ghazizadeh-Ahsaee M, Karimipour H. “Cyber intrusion detection by combined feature selection algorithm”. Journal of information security and applications, 44, 80-88, 2019.
  • [22] Sharif M, Khan MA, Iqbal Z, Azam MF, Lali MIU, Javed MY. “Detection and classification of citrus diseases in agriculture based on optimized weighted segmentation and feature selection”. Computers and Electronics in Agriculture, 150, 220-234, 2018.
  • [23] Rodriguez-Galiano VF, Luque-Espinar JA, Chica-Olmo M, Mendes MP. “Feature selection approaches for predictive modelling of groundwater nitrate pollution: An evaluation of filters, embedded and wrapper methods”. Science of the Total Environment, 624, 661-672, 2018.
  • [24] Xue B, Zhang M, Browne WN, Yao X. “A survey on evolutionary computation approaches to feature selection”. IEEE Transactions on Evolutionary Computation, 20(4), 606-626, 2015.
  • [25] Sharma S, Kaur P. “A comprehensive analysis of nature-inspired meta-heuristic techniques for feature selection problem”. Archives of Computational Methods in Engineering, 28(3), 1103-1127, 2021.
  • [26] Kumar V, Minz S. “Feature selection: a literature review”. SmartCR, 4(3), 211-229, 2014.
  • [27] Bolon-Canedo V, Sanchez-Marono N, Alonso-Betanzos A. “A review of feature selection methods on synthetic data”. Knowledge and Information Systems, 34(3), 483-519, 2013.
  • [28] Li H, Li CJ, Wu XJ, Sun J. “Statistics-based wrapper for feature selection: An implementation on financial distress identification with support vector machine”. Applied Soft Computing, 19, 57-67, 2014.
  • [29] Cui L, Bai L, Wang Y, Jin X, Hancock ER. “Internet financing credit risk evaluation using multiple structural interacting elastic net feature selection”. Pattern Recognition, 114, 1-13, 2021.
  • [30] Jadhav S, He H, Jenkins K. “Information gain directed genetic algorithm wrapper feature selection for credit rating”. Applied Soft Computing, 69, 541-553, 2018.
  • [31] Liang D, Tsai CF, Wu HT. “The effect of feature selection on financial distress prediction”. Knowledge-Based Systems, 73, 289-297, 2015.
  • [32] Sivasankar E, Selvi C, Mahalakshmi S. “Rough set-based feature selection for credit risk prediction using weight-adjusted boosting ensemble method”. Soft Computing, 24(6), 3975-3988, 2020.
  • [33] Zhang X, Hu Y, Xie K, Wang S, Ngai E, Liu M. “A causal feature selection algorithm for stock prediction modeling”. Neurocomputing, 142, 48-59, 2014.
  • [34] Lin WC, Lu YH, Tsai CF. “Feature selection in single and ensemble learning-based bankruptcy prediction models”. Expert Systems, 36(1), 1-8, 2019.
  • [35] Adadi A, Berrada M. “Peeking inside the black-box: a survey on explainable artificial intelligence (XAI)”. IEEE Access, 6, 52138-52160, 2018.
  • [36] Yıldırım-Okay F, Yıldırım M, and Ozdemir S. “Interpretable machine learning: A case study of healthcare”. in 2021 International Symposium on Networks, Computers and Communications (ISNCC). IEEE, Dubai, United Arab Emirates, 31 October-2 November 2021.
  • [37] ElShawi R, Sherif Y, Al-Mallah M, Sakr S. “Interpretability in healthcare: A comparative study of local machine learning interpretability techniques”. Computational Intelligence, 37(4), 1633-1650, 2021.
  • [38] Zablocki E, Ben-Younes H, Pérez P, Cord M. “Explainability of Vision-Based Autonomous Driving Systems: Review and Challenges”. arXiv, 2021. https://arxiv.org/pdf/2101.05307.pdf.
  • [39] Demajo LM, Vella V, Dingli A. “Explainable AI for interpretable credit scoring”. arXiv, 2020. https://arxiv.org/pdf/2012.03749.pdf.
  • [40] Bussmann N, Giudici P, Marinelli D, Papenbrock J. “Explainable AI in fintech risk management”. Frontiers in Artificial Intelligence, 3(26), 1-5, 2020.
  • [41] Ito T, Sakaji H, Izumi K, Tsubouchi K, Yamashita T. “GINN: Gradient interpretable neural networks for visualizing financial texts”. International Journal of Data Science and Analytics, 9(4), 431-445, 2020.
  • [42] Cong LW, Tang K, Wang J, Zhang Y. “AlphaPortfolio: Direct construction through reinforcement learning and interpretable AI”. Available at SSRN, 2021. https://ssrn.com/abstract=3554486
  • [43] Grath RM, Costabello L, Van CL, Sweeney P, Kamiab F, Shen Z, Lecue F. “Interpretable credit application predictions with counterfactual explanations”. arXiv, 2018. https://arxiv.org/pdf/1811.05245.pdf.
  • [44] Ghosh I, Dragan P. “Can financial stress be anticipated and explained? Uncovering the hidden pattern using EEMD-LSTM, EEMD-prophet, and XAI methodologies”. Expert Systems with Applications, 9, 4169-4193, 2023.
  • [45] Babaei G, Giudici P. “Which SME is worth an investment? An explainable machine learning approach”. Available at SSRN, 2021. https://ssrn.com/abstract=3810618
  • [46] Tra KL, Le HA, Nguyen TH, Nguyen DT. “Explainable machine learning for financial distress prediction: evidence from Vietnam”. Data, 7(11), 1-12, 2022.
  • [47] Ariza-Garzón MJ, Arroyo J, Caparrini A, Segovia-Vargas MJ. “Explainability of a machine learning granting scoring model in peer-to-peer lending”. IEEE Access, 8, 64873-64890, 2020.
  • [48] Misheva BH, Osterrieder J, Hirsa A, Kulkarni O, Lin SF. “Explainable AI in credit risk management”. arXiv, 2021. https://arxiv.org/pdf/2103.00949.pdf.
  • [49] Bracke P, Datta A, Jung C, Sen S. “Machine Learning Explainability in Finance: An Application to Default Risk Analysis”. Available at SSRN, 2019. https://ssrn.com/abstract=3435104
  • [50] Chandrashekar G, Sahin F. “A survey on feature selection methods”. Computers & Electrical Engineering, 40(1), 16-28, 2014.
  • [51] Li J, Cheng K, Wang S, Morstatter F, Trevino RP, Tang J, Liu H. “Feature selection: A data perspective”. ACM Computing Surveys (CSUR), 50(6), 1-45, 2017.
  • [52] Saeys Y, Inza I, Larranaga P. “A review of feature selection techniques in bioinformatics”. Bioinformatics, 23(19), 2507-2517, 2007.
  • [53] Carvalho DV, Pereira EM, Cardoso JS. “Machine learning interpretability: A survey on methods and metrics”. Electronics, 8(8), 1-34, 2019.
  • [54] Vellido A, Martin-Guerrero JD, Lisboa PJ. “Making machine learning models interpretable”. in European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), Bruges, Belgium, 25-27 April 2012.
  • [55] Molnar C. “Interpretable Machine Learning: A Guide for Making Black Box Models Explainable”. https://christophm.github.io/interpretable-ml-book/ (14.05.2024).
  • [56] Ruping S. Learning Interpretable Models. Ph.D. Thesis, University of Dortmund, Dortmund, Germany, 2006.
  • [57] Robnik-Sikonja M, Bohanec M. Perturbation-Based Explanations of Prediction Models. Editors: Zhou J, Chen F. Human and Machine Learning, 159-175, Springer, Cham, 2018.
  • [58] Mothilal RK, Sharma A, Tan C. “Explaining machine learning classifiers through diverse counterfactual explanations”. in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, NY, USA, 27-30 January 2020.
  • [59] Zięba M, Tomczak SK, Tomczak JM. “Ensemble boosted trees with synthetic features generation in application to bankruptcy prediction”. Expert Systems with Applications, 58, 93-101, 2016.
  • [60] Real R, Vargas JM. “The probabilistic basis of Jaccard’s index of similarity”. Systematic Biology, 45(3), 380-385, 1996.
  • [61] Tata S, Patel JM. “Estimating the selectivity of tf-idf based cosine similarity predicates”. ACM Sigmod Record, 36(2), 7-12, 2007.

There are 61 citations in total.

Details

Primary Language English
Subjects Data Structures and Algorithms
Journal Section Research Article
Authors

Mustafa Yildirim

Feyza Yıldırım Okay

Suat Özdemir

Publication Date August 30, 2024
Published in Issue Year 2024 Volume: 30 Issue: 4

Cite

APA Yildirim, M., Yıldırım Okay, F., & Özdemir, S. (2024). A comparative analysis on the reliability of interpretable machine learning. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi, 30(4), 494-508.
AMA Yildirim M, Yıldırım Okay F, Özdemir S. A comparative analysis on the reliability of interpretable machine learning. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi. August 2024;30(4):494-508.
Chicago Yildirim, Mustafa, Feyza Yıldırım Okay, and Suat Özdemir. “A Comparative Analysis on the Reliability of Interpretable Machine Learning”. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi 30, no. 4 (August 2024): 494-508.
EndNote Yildirim M, Yıldırım Okay F, Özdemir S (August 1, 2024) A comparative analysis on the reliability of interpretable machine learning. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi 30 4 494–508.
IEEE M. Yildirim, F. Yıldırım Okay, and S. Özdemir, “A comparative analysis on the reliability of interpretable machine learning”, Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi, vol. 30, no. 4, pp. 494–508, 2024.
ISNAD Yildirim, Mustafa et al. “A Comparative Analysis on the Reliability of Interpretable Machine Learning”. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi 30/4 (August 2024), 494-508.
JAMA Yildirim M, Yıldırım Okay F, Özdemir S. A comparative analysis on the reliability of interpretable machine learning. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi. 2024;30:494–508.
MLA Yildirim, Mustafa et al. “A Comparative Analysis on the Reliability of Interpretable Machine Learning”. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi, vol. 30, no. 4, 2024, pp. 494-08.
Vancouver Yildirim M, Yıldırım Okay F, Özdemir S. A comparative analysis on the reliability of interpretable machine learning. Pamukkale Üniversitesi Mühendislik Bilimleri Dergisi. 2024;30(4):494-508.





Creative Commons License
This journal is licensed under a Creative Commons Attribution 4.0 International License.