Research Article

Prediction of Turkish Constitutional Court Decisions with Explainable Artificial Intelligence

Year 2023, Volume: 7, Issue: 2, 128 - 141, 30.09.2023
https://doi.org/10.30516/bilgesci.1317525

Abstract

The use of artificial intelligence in law has attracted growing attention in recent years. This study aims to classify the case decisions of the Constitutional Court of the Republic of Turkey. For this purpose, open-access data published by the Constitutional Court on its Decisions Information Bank website were used. Five machine learning (ML) algorithms were applied: KNN (K-Nearest Neighbors), SVM (Support Vector Machine), DT (Decision Tree), RF (Random Forest), and XGBoost (Extreme Gradient Boosting). Precision, Recall, F1-Score, and Accuracy metrics were used to compare the models. The evaluation showed that the XGBoost model gave the best results, with 93.84% Accuracy, 93% Precision, 93% Recall, and 93% F1-Score. It is important that a model be not only accurate but also transparent and interpretable. Therefore, this article uses the SHAP (SHapley Additive exPlanations) method, one of the explainable artificial intelligence techniques, to explain the features that affect the classification of case outcomes. This is the first study in the Republic of Turkey to use explainable artificial intelligence techniques in predicting court decisions with artificial intelligence.
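
The abstract describes the modeling pipeline only at a high level. The minimal Python sketch below shows one way such a comparison of the five named classifiers, the four reported metrics, and a SHAP explanation of the best model could be wired together with scikit-learn, XGBoost, and the shap library. The file name "constitutional_court_cases.csv", the "decision" label column, and all hyperparameters are illustrative assumptions, not the authors' actual data or configuration.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import LabelEncoder
    from sklearn.metrics import accuracy_score, precision_recall_fscore_support
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from xgboost import XGBClassifier
    import shap

    # Hypothetical pre-processed feature table; "decision" holds the case outcome.
    df = pd.read_csv("constitutional_court_cases.csv")   # placeholder path
    X = df.drop(columns=["decision"])
    y = LabelEncoder().fit_transform(df["decision"])      # integer class labels

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y)

    # The five algorithms named in the abstract, with default hyperparameters.
    models = {
        "KNN": KNeighborsClassifier(),
        "SVM": SVC(),
        "DT": DecisionTreeClassifier(random_state=42),
        "RF": RandomForestClassifier(random_state=42),
        "XGBoost": XGBClassifier(eval_metric="logloss", random_state=42),
    }

    for name, model in models.items():
        model.fit(X_train, y_train)
        y_pred = model.predict(X_test)
        prec, rec, f1, _ = precision_recall_fscore_support(
            y_test, y_pred, average="weighted", zero_division=0)
        print(f"{name}: Accuracy={accuracy_score(y_test, y_pred):.4f} "
              f"Precision={prec:.4f} Recall={rec:.4f} F1={f1:.4f}")

    # SHAP on the tree-based model reported as best: TreeExplainer attributes
    # each prediction to the input features.
    explainer = shap.TreeExplainer(models["XGBoost"])
    shap_values = explainer.shap_values(X_test)
    shap.summary_plot(shap_values, X_test)                # global feature-influence view

The real feature names would come from the authors' preprocessing of the Decisions Information Bank data, which is not reproduced here; the SHAP summary plot is the kind of global feature-importance view the abstract refers to when it says the features affecting classification are explained.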

References

  • Agarwal, R., Melnick, L., Frosst, N., Zhang, X., Lengerich, B., Caruana, R., & Hinton, G. E. (2021). Neural additive models: Interpretable machine learning with neural nets. Advances in Neural Information Processing Systems, 34, 4699-4711.
  • Aletras, N., Tsarapatsanis, D., Preoţiuc-Pietro, D., & Lampos, V. (2016). Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective. PeerJ Computer Science, 2, e93.
  • Anders, C. J., Neumann, D., Samek, W., Müller, K. R., & Lapuschkin, S. (2021). Software for dataset-wide XAI: from local explanations to global insights with Zennit, CoRelAy, and ViRelAy. arXiv preprint arXiv:2106.13200.
  • Antos, A., Nadhamuni, N. (2021). Practical guide to artificial intelligence and contract review. In: Research Handbook on Big Data Law, ed. Vogl, R., 467-481, Edward Elgar Publishing.
  • Bistron, M., Piotrowski, Z. (2021). Artificial intelligence applications in military systems and their influence on sense of security of citizens. Electronics, 10(7), 871.
  • Brereton, R. G., Lloyd, G. R. (2010). Support vector machines for classification and regression. Analyst, 135(2), 230-267.
  • Chalkidis, I., Androutsopoulos, I., Aletras, N. (2019). Neural legal judgment prediction in English. arXiv preprint arXiv:1906.02059.
  • Chen, L., Chen, P., Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264-75278.
  • Chen, T., Guestrin, C. (2016). XGBoost: A scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785-794, Association for Computing Machinery, New York, United States.
  • Colaner, N. (2021). Is explainable artificial intelligence intrinsically valuable? AI & SOCIETY, 37, 231-238.
  • Collenette, J., Atkinson, K., Bench-Capon, T. J. (2020). An Explainable Approach to Deducing Outcomes in European Court of Human Rights Cases Using ADFs, In: COMMA, ed. Prakken, H., Bistarelli, S. and Santini, F., 21-32, IOS Press.
  • Das, A., Rad, P. (2020). Opportunities and challenges in explainable artificial intelligence (XAI): A survey. arXiv preprint arXiv:2006.11371.
  • Di Vaio, A., Palladino, R., Hassan, R., Escobar, O. (2020). Artificial intelligence and business models in the sustainable development goals perspective: A systematic literature review. Journal of Business Research, 121, 283-314.
  • Dong, W., Huang, Y., Lehane, B., Ma, G. (2020). XGBoost algorithm-based prediction of concrete electrical resistivity for structural health monitoring. Automation in Construction, 114, 103155.
  • Doshi-Velez, F., Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
  • Fletcher, S., Islam, M. Z. (2019). Decision tree classification with differential privacy: A survey. ACM Computing Surveys (CSUR), 52(4), 1-33.
  • Gan, L., Li, B., Kuang, K., Yang, Y., & Wu, F. (2022). Exploiting Contrastive Learning and Numerical Evidence for Improving Confusing Legal Judgment Prediction. arXiv preprint arXiv:2211.08238.
  • Ghorbani, A., Wexler, J., Zou, J. Y., Kim, B. (2019). Towards automatic concept-based explanations. Advances in Neural Information Processing Systems, 32.
  • Ghosh, S., Dasgupta, A., Swetapadma, A. (2019). A study on support vector machine based linear and non-linear pattern classification. In: 2019 International Conference on Intelligent Sustainable Systems (ICISS) (pp. 24-28). IEEE.
  • Goertzel, B., Pennachin, C. (2007). The Novamente artificial intelligence engine. Artificial general intelligence, 63-129.
  • Górski, Ł., Ramakrishna, S. (2021, June). Explainable artificial intelligence, lawyer's perspective. In: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law (pp. 60-68).
  • Górski, Ł., Ramakrishna, S., Nowosielski, J. M. (2020). Towards Grad-CAM based explainability in a legal text processing pipeline. arXiv preprint arXiv:2012.09603.
  • Gunning, D., Aha, D. (2019). DARPA’s explainable artificial intelligence (XAI) program. AI Magazine, 40(2), 44-58.
  • Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., Yang, G. Z. (2019). XAI—Explainable artificial intelligence. Science Robotics, 4(37).
  • Guo, X., Zhang, H., Ye, L., Li, S. (2021). TenLa: an approach based on controllable tensor decomposition and optimized lasso regression for judgement prediction of legal cases. Applied Intelligence, 51, 2233-2252.
  • Ivanovs, M., Kadikis, R., Ozols, K. (2021). Perturbation-based methods for explaining deep neural networks: A survey. Pattern Recognition Letters, 150, 228-234.
  • Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., Wang, Y. (2017). Artificial intelligence in healthcare: past, present and future. Stroke and Vascular Neurology, 2(4).
  • Jiang, H., He, Z., Ye, G., Zhang, H. (2020). Network intrusion detection based on PSO-XGBoost model. IEEE Access, 8, 58392-58401.
  • Katz, D. M., Bommarito, M. J., Blackman, J. (2017). A general approach for predicting the behavior of the Supreme Court of the United States. PLoS ONE, 12(4).
  • Kaur, A., Bozic, B. (2019). Convolutional Neural Network-based Automatic Prediction of Judgments of the European Court of Human Rights. In: AICS, pp 458-469
  • Kenny, E. M., Ford, C., Quinn, M., Keane, M. T. (2021). Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies. Artificial Intelligence, 294, 103459.
  • Knox, J. (2020). Artificial intelligence and education in China. Learning, Media and Technology, 45(3), 298-311.
  • Labin, S., Segal, U. (2021). AI-driven contract review: A product development journey. In Research Handbook on Big Data Law, 454-466, Edward Elgar Publishing.
  • Lapuschkin, S., Wäldchen, S., Binder, A., Montavon, G., Samek, W., Müller, K. R. (2019). Unmasking Clever Hans predictors and assessing what machines really learn. Nature Communications, 10(1), 1096.
  • Letham, B., Rudin, C., McCormick, T. H., Madigan, D. (2012). Building interpretable classifiers with rules using Bayesian analysis. Department of Statistics Technical Report tr609, University of Washington, 9(3), 1350-1371.
  • Li, S., Zhang, H., Ye, L., Guo, X., Fang, B. (2019). MANN: A multichannel attentive neural network for legal judgment prediction. IEEE Access, 7, 151144-151155.
  • Lin, Y. S., Lee, W. C., Celik, Z. B. (2021). What do you see? Evaluation of explainable artificial intelligence (XAI) interpretability through neural backdoors. In: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining (pp. 1027-1035).
  • Long, S., Tu, C., Liu, Z., Sun, M. (2019). Automatic judgment prediction via legal reading comprehension. In: Chinese Computational Linguistics: 18th China National Conference, 558-572, Springer International Publishing.
  • Loureiro, S. M. C., Guerreiro, J., Tussyadiah, I. (2021). Artificial intelligence in business: State of the art and future research agenda. Journal of Business Research, 129, 911-926.
  • Lundberg, S. M., Lee, S. I. (2017). A unified approach to interpreting model predictions. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, 30, 4768-4777.
  • Ma, L., Zhang, Y., Wang, T., Liu, X., Ye, W., Sun, C., Zhang, S. (2021). Legal judgment prediction with multi-stage case representation learning in the real court setting. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 993-1002).
  • Mangalathu, S., Hwang, S. H., Jeon, J. S. (2020). Failure mode and effects analysis of RC members based on machine-learning-based SHapley Additive exPlanations (SHAP) approach. Engineering Structures, 219, 110927.
  • Mantovani, R. G., Horváth, T., Cerri, R., Vanschoren, J., De Carvalho, A. C. (2016, October). Hyper-parameter tuning of a decision tree induction algorithm. In: 2016 5th Brazilian Conference on Intelligent Systems (BRACIS) (pp. 37-42). IEEE.
  • Meço, G., Çoştu, F. (2022). Eğitimde Yapay Zekânın Kullanılması: Betimsel İçerik Analizi Çalışması. Karadeniz Teknik Üniversitesi Sosyal Bilimler Enstitüsü Sosyal Bilimler Dergisi, 12(23), 171-193.
  • Mumcuoğlu, E., Öztürk, C. E., Ozaktas, H. M., Koç, A. (2021). Natural language processing in law: Prediction of outcomes in the higher courts of Turkey. Information Processing & Management, 58(5), 102684.
  • Mumford, J., Atkinson, K., Bench-Capon, T. (2021). Machine learning and legal argument. In: CEUR Workshop Proceedings (Vol. 2937, pp. 47-56).
  • Nanfack, G., Temple, P., Frénay, B. (2022). Constraint Enforcement on Decision Trees: A Survey. ACM Computing Surveys (CSUR), 54(10s), 1-36.
  • Nie, W., Zhang, Y., Patel, A. (2018). A theoretical explanation for perplexing behaviors of backpropagation-based visualizations. In: International Conference on Machine Learning, PMLR, 3809-3818.
  • Nikam, S. S. (2015). A comparative study of classification techniques in data mining algorithms. Oriental Journal of Computer Science and Technology, 8(1), 13-19.
  • Niklaus, J., Chalkidis, I., Stürmer, M. (2021). Swiss-judgment-prediction: A multilingual legal judgment prediction benchmark. arXiv preprint arXiv:2110.00806.
  • Niklaus, J., Stürmer, M., Chalkidis, I. (2022). An Empirical Study on Cross-X Transfer for Legal Judgment Prediction. arXiv preprint arXiv:2209.12325.
  • Ribeiro, M. T., Singh, S., Guestrin, C. (2016). Model-agnostic interpretability of machine learning. arXiv preprint arXiv:1606.05386.
  • Rodríguez-Pérez, R., Bajorath, J. (2020). Interpretation of machine learning models using Shapley values: application to compound potency and multi-target activity predictions. Journal of Computer-Aided Molecular Design, 34, 1013-1026.
  • Rokach, L. (2016). Decision forest: Twenty years of research. Information Fusion, 27, 111-125.
  • Roll, I., Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial Intelligence in Education, 26, 582-599.
  • Schratz, P., Muenchow, J., Iturritxa, E., Richter, J., Brenning, A. (2019). Hyperparameter tuning and performance assessment of statistical and machine-learning algorithms using spatial data. Ecological Modelling, 406, 109-120.
  • Semo, G., Bernsohn, D., Hagag, B., Hayat, G., Niklaus, J. (2022). ClassActionPrediction: A Challenging Benchmark for Legal Judgment Prediction of Class Action Cases in the US. arXiv preprint arXiv:2211.00582.
  • Sert, M. F., Yıldırım, E., Haşlak, İ. (2022). Using artificial intelligence to predict decisions of the Turkish constitutional court. Social Science Computer Review, 40(6), 1416-1435.
  • Shahid, R., Bertazzon, S., Knudtson, M. L., Ghali, W. A. (2009). Comparison of distance measures in spatial analytical modeling for health service planning. BMC Health Services Research, 9(1), 1-14.
  • Simonyan, K., Vedaldi, A., Zisserman, A. (2013). Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.
  • Singh, A., Yadav, A., Rana, A. (2013). K-means with Three different Distance Metrics. International Journal of Computer Applications, 67(10), 13-17.
  • Socatiyanurak, V., Klangpornkun, N., Munthuli, A., Phienphanich, P., Kovudhikulrungsri, L., Saksakulkunakorn, N., Tantibundhit, C. (2021). Law-u: Legal guidance through artificial intelligence chatbot for sexual violence victims and survivors. IEEE Access, 9, 131440-131461.
  • Somvanshi, M., Chavan, P., Tambade, S., Shinde, S. V. (2016, August). A review of machine learning techniques using decision tree and support vector machine. In 2016 international conference on computing communication control and automation (ICCUBEA) (pp. 1-7). IEEE.
  • Spinner, T., Schlegel, U., Schäfer, H., El-Assady, M. (2019). explAIner: A visual analytics framework for interactive and explainable machine learning. IEEE Transactions on Visualization and Computer Graphics, 26(1), 1064-1074.
  • Stevenson, D., Wagoner, N. J. (2015). Bargaining in the shadow of big data. Fla. L. Rev., 67, 1337.
  • Strickson, B., De La Iglesia, B. (2020, March). Legal judgement prediction for UK courts. In Proceedings of the 3rd International Conference on Information Science and Systems (pp. 204-209).
  • Tideman, L. E., Migas, L. G., Djambazova, K. V., Patterson, N. H., Caprioli, R. M., Spraggins, J. M., Van de Plas, R. (2021). Automated biomarker candidate discovery in imaging mass spectrometry data through spatially localized Shapley additive explanations. Analytica Chimica Acta, 1177, 338522.
  • Tritscher, J., Ring, M., Schlör, D., Hettinger, L., & Hotho, A. (2020). Evaluation of post-hoc XAI approaches through synthetic tabular data. In Foundations of Intelligent Systems: 25th International Symposium, ISMIS 2020, Graz, Austria, September 23–25, 2020, Proceedings (pp. 422-430). Springer International Publishing.
  • Turan, T., Turan, G., Köse, U. (2022). Uyarlamalı Ağ Tabanlı Bulanık Mantık Çıkarım Sistemi ve Yapay Sinir Ağları ile Türkiye’deki COVID-19 Vefat Sayısının Tahmin Edilmesi. Bilişim Teknolojileri Dergisi, 15(2), 97-105.
  • Weber, T., Wermter, S. (2020). Integrating intrinsic and extrinsic explainability: The relevance of understanding neural networks for human-robot interaction. arXiv preprint arXiv:2010.04602.
  • Xiao, C., Zhong, H., Guo, Z., Tu, C., Liu, Z., Sun, M., Xu, J. (2018). Cail2018: A large-scale legal dataset for judgment prediction. arXiv preprint arXiv:1807.02478.
  • Xu, Z. (2022). Human judges in the era of artificial intelligence: challenges and opportunities. Applied Artificial Intelligence, 36(1), 2013652.
  • Yan, G., Li, Y., Shen, S., Zhang, S., Liu, J. (2019, July). Law article prediction based on deep learning. In 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C) (pp. 281-284). IEEE.
  • Yang, L., Zeng, J., Peng, T., Luo, X., Zhang, J., Lin, H. (2020, October). Leniency to those who confess? Predicting the legal judgement via multi-modal analysis. In Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 645-649).
  • Yu, K. H., Beam, A. L., Kohane, I. S. (2018). Artificial intelligence in healthcare. Nature Biomedical Engineering, 2(10), 719-731.
  • Zadgaonkar, A. V., Agrawal, A. J. (2021). An overview of information extraction techniques for legal document analysis and processing. International Journal of Electrical & Computer Engineering (2088-8708), 11(6).
  • Zeiler, M. D., Fergus, R. (2014). Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I 13 (pp. 818-833). Springer International Publishing.
  • Zhong, H., Guo, Z., Tu, C., Xiao, C., Liu, Z., Sun, M. (2018). Legal judgment prediction via topological learning. In Proceedings of the 2018 conference on empirical methods in natural language processing (pp. 3540-3549).
There are 78 references in total.

Details

Primary Language: English
Subjects: Software Engineering (Other)
Section: Research Articles
Authors

Tülay Turan 0000-0002-0888-0343

Ecir Küçüksille 0000-0002-3293-9878

Nazan Kemaloğlu Alagöz 0000-0002-6262-4244

Early View Date: September 30, 2023
Publication Date: September 30, 2023
Acceptance Date: September 15, 2023
Published in Issue: Year 2023, Volume: 7, Issue: 2

Cite

APA Turan, T., Küçüksille, E., & Kemaloğlu Alagöz, N. (2023). Prediction of Turkish Constitutional Court Decisions with Explainable Artificial Intelligence. Bilge International Journal of Science and Technology Research, 7(2), 128-141. https://doi.org/10.30516/bilgesci.1317525