Research Article

Hybrid Explainable Artificial Intelligence Design and LIME Application

Year 2021, Issue: 27, 228 - 236, 30.11.2021
https://doi.org/10.31590/ejosat.959030

Abstract

With the rapid development of today's technology, artificial intelligence has become indispensable in many areas of daily life. Although it has begun to be used in fields where the cost of a wrong decision is high, such as finance, health, and law, regulation in these fields restricts it to a limited level of use. The main reason for this restriction is the low explainability of the high-performance results obtained. In this study, a hybrid design with high explainability is proposed and applied to different datasets from the fields of finance and health. The hybrid approach is carried out in three basic stages: transparent, hybrid, and explainability. In the explainability stage, LIME is used to determine the effect of the variables on the predictions of locally selected observations, and the results are interpreted.
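For context, LIME (Ribeiro et al., 2016b) explains an individual prediction by fitting an interpretable surrogate model in the neighborhood of the chosen observation. Its objective can be written as

    \xi(x) = \underset{g \in G}{\arg\min}\; \mathcal{L}(f, g, \pi_x) + \Omega(g)

where f is the black-box model, g is an interpretable model drawn from a family G, \pi_x is a proximity kernel that weights perturbed samples by their closeness to the instance x, and \Omega(g) penalizes the complexity of the surrogate.

The sketch below illustrates the explainability stage on tabular data with the Python lime package. It is a minimal sketch under stated assumptions, not the authors' pipeline: the Wisconsin breast cancer data and a random forest are stand-ins for the paper's health dataset and black-box model (both sources appear in the references), and all parameter choices here are illustrative.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from lime.lime_tabular import LimeTabularExplainer  # pip install lime

    # Health dataset: Wisconsin breast cancer data (Wolberg & Mangasarian, 1990,
    # distributed via the UCI repository cited in the references).
    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.3, random_state=0)

    # Hybrid stage stand-in: a high-performance but opaque model
    # (random forest, Breiman, 2001).
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Explainability stage: fit LIME's perturbation model on the
    # training distribution.
    explainer = LimeTabularExplainer(
        X_train,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        discretize_continuous=True,
    )

    # Explain one locally selected observation: which variables
    # drove its predicted class probability?
    explanation = explainer.explain_instance(
        X_test[0], model.predict_proba, num_features=5)
    for feature, weight in explanation.as_list():
        print(f"{feature}: {weight:+.3f}")

Each returned (feature, weight) pair is a locally faithful, signed contribution to the prediction, which is how per-variable effects for a selected observation can be read and interpreted.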

References

  • Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
  • Breiman, L. (2001). Random Forests. Machine Learning, 45, 5–32. https://doi.org/10.1023/A:1010933404324
  • Credit Score Accuracy and Implications for Consumers. (2002).
  • Dua, D., & Graff, C. (2019). UCI Machine Learning Repository. http://archive.ics.uci.edu/ml
  • Freitas, A. A. (2014). Comprehensible classification models. ACM SIGKDD Explorations Newsletter, 15(1), 1–10. https://doi.org/10.1145/2594473.2594475
  • Gambacorta, L., Huang, Y., Qiu, H., & Wang, J. (2019). How do machine learning and non-traditional data affect credit scoring? New evidence from a Chinese fintech firm (BIS Working Papers No. 834).
  • Garreau, D., & von Luxburg, U. (2020). Looking Deeper into Tabular LIME. http://arxiv.org/abs/2008.11092
  • Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1–45. https://doi.org/10.1145/3236009
  • High-Level Expert Group on AI (AI HLEG). (2019). Ethics Guidelines for Trustworthy AI. In Ethics Guidelines for Trustworthy AI. Independent High-Level Expert Group on Artificial Intelligence Set up by the European Commission - Ethics Guidelines for Trustworthy AI. https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf
  • Ingold, D., & Spencer, S. (2016). Amazon Doesn’t Consider the Race of Its Customers. Should It? https://www.bloomberg.com/graphics/2016-amazon-same-day/
  • Keeble, B. R. (1988). The Brundtland Report: “Our Common Future.” In Medicine and War (Vol. 4, Issue 1, pp. 17–25). https://doi.org/10.1080/07488008808408783
  • Kirchner, L., Mattu, S., Larson, J., & Angwin, J. (2016). Machine Bias. Propublica, 1–26. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  • Malioutov, D. M., Varshney, K. R., Emad, A., & Dash, S. (2017). Learning Interpretable Classification Rules with Boolean Compressed Sensing. In Transparent Data Mining for Big and Small Data (pp. 95–121). Springer. https://doi.org/10.1007/978-3-319-54024-5_5
  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016a). Nothing Else Matters: Model-Agnostic Explanations by Identifying Prediction Invariance. 30th Conference on Neural Information Processing Systems (NIPS 2016). http://arxiv.org/abs/1611.05817
  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016b). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2016), 1135–1144. https://doi.org/10.1145/2939672.2939778
  • Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
  • Wolberg, W. H., & Mangasarian, O. L. (1990). Multisurface method of pattern separation for medical diagnosis applied to breast cytology. Proceedings of the National Academy of Sciences of the USA, 87, 9193–9196.
  • Yeh, I.-C., & Lien, C. (2009). The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients. Expert Systems with Applications, 36(2), 2473–2480. https://doi.org/10.1016/j.eswa.2007.12.020

Details

Primary Language Turkish
Subjects Engineering
Journal Section Articles
Authors

Ayça Çakmak Pehlivanlı (ORCID: 0000-0001-9884-6538)

Rahmi Ahmet Selim Deliloğlu (ORCID: 0000-0003-1216-3918)

Early Pub Date July 29, 2021
Publication Date November 30, 2021
Published in Issue Year 2021 Issue: 27

Cite

APA Çakmak Pehlivanlı, A., & Deliloğlu, R. A. S. (2021). Hibrit Açıklanabilir Yapay Zeka Tasarımı ve LIME Uygulaması. Avrupa Bilim ve Teknoloji Dergisi, (27), 228-236. https://doi.org/10.31590/ejosat.959030