Review


From Transparency to Trust: A Literature Review on Explainable AI in Educational Systems

Year 2025, Volume: 21, Issue: 3, 999–1018, 22.12.2025
https://doi.org/10.17860/mersinefd.1739347

Abstract

As artificial intelligence (AI) systems increasingly shape educational experiences, the demand for transparency and trust in algorithmic decision-making has led to a growing interest in Explainable AI (XAI). This literature review examines how XAI is being implemented in educational contexts across different countries, highlighting both its pedagogical promises and ethical tensions. Drawing from real-world applications and student reflections, the study explores how explainability enhances feedback systems, supports learner autonomy, and fosters instructional alignment. However, it also identifies persistent challenges, including cognitive over-reliance, educator deskilling, and the illusion of fairness. Through a synthesis of global case studies, this review offers a comprehensive analysis of design considerations, policy frameworks, and user-centered practices for XAI in education. The study concludes with a detailed checklist for ethical deployment and a set of forward-looking recommendations aimed at ensuring that XAI contributes meaningfully to equitable, human-centered learning environments.
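
For readers unfamiliar with what "explainability" looks like in practice, the short Python sketch below is illustrative only: the model, the feature names (quiz_avg, forum_posts, video_minutes, late_submissions), and the synthetic data are hypothetical assumptions and are not drawn from the reviewed studies. It merely shows how SHAP (Lundberg & Lee, 2017), one of the post-hoc explanation methods cited in the reference list, attributes a predicted grade to individual features, which is the kind of signal an XAI-enabled feedback dashboard might surface to learners or teachers.

# Minimal sketch of feature-level explanation for a grade prediction.
# All data and feature names below are synthetic and hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(seed=0)
features = ["quiz_avg", "forum_posts", "video_minutes", "late_submissions"]
X = rng.uniform(0.0, 1.0, size=(200, len(features)))
# Synthetic "final grade": driven mainly by quiz average, penalized by late submissions.
y = 60 + 30 * X[:, 0] + 5 * X[:, 1] + 3 * X[:, 2] - 10 * X[:, 3] + rng.normal(0, 2, 200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# SHAP values decompose one student's predicted grade into per-feature contributions.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]

print(f"Predicted grade: {model.predict(X[:1])[0]:.1f}")
for name, value in zip(features, contributions):
    print(f"  {name:>16}: {value:+.2f}")

In a classroom-facing tool, such per-feature contributions would typically be rendered as plain-language or visual feedback rather than raw numbers, in line with the user-centered design considerations discussed in the review.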

References

  • Akhmad, N. W., & Munawir, A. (2022). Improving the students’ pronunciation ability by using ELSA Speak app. IDEAS: Journal on English Language Teaching and Learning, Linguistics and Literature, 10(1), 846–857. https://doi.org/10.24256/ideas.v10i1.2868
  • Alkhatlan, A., & Kalita, J. (2019). Intelligent tutoring systems: A comprehensive historical survey with recent developments. International Journal of Computer Applications, 181(18), 1–20. https://doi.org/10.5120/ijca2019918451
  • Anggraini, A. (2022). Improving students’ pronunciation skill using ELSA Speak application. Journey: Journal of English Language and Pedagogy, 5(1), 135–141. https://doi.org/10.33503/journey.v5i1.1840
  • Barocas, S., Hardt, M., & Narayanan, A. (2020). Fairness and machine learning. fairmlbook.org. https://fairmlbook.org/pdf/fairmlbook.pdf
  • Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018). “It’s reducing a human being to a percentage”: Perceptions of justice in algorithmic decisions. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Paper No. 377, pp. 1–14). Association for Computing Machinery. https://doi.org/10.1145/3173574.3173951
  • Bransford, J. D., Brown, A. L., & Cocking, R. R. (2000). How people learn: Brain, mind, experience, and school. National Academies Press. https://nap.nationalacademies.org/download/9853
  • Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245–281. https://doi.org/10.3102/00346543065003245
  • Chamola, V., Hassija, V., Sulthana, A. R., Ghosh, D., Dhingra, D., & Sikdar, B. (2023). A review of trustworthy and explainable artificial intelligence (XAI). IEEE Access, 11, 43129–43146. https://doi.org/10.1109/ACCESS.2023.3294569
  • Chen, L., Chen, P., & Lin, Z. (2021). Artificial intelligence in education: A review. IEEE Access, 9, 37524–37537. https://openresearch.amsterdam/image/2021/8/11/artificial_intelligence_in_education_a_review.pdf
  • Dadi, R., & Sanampudi, S. (2021). An automated essay scoring system: A systematic literature review. Artificial Intelligence Review, 55, 2495–2527. https://doi.org/10.1007/s10462-021-10068-2
  • Deck, L., Schoeffer, J., De Arteaga, M., & Kühl, N. (2024). A critical survey on fairness benefits of explainable AI. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). ACM. https://doi.org/10.1145/3630106.3658990
  • Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. https://doi.org/10.48550/arXiv.1702.08608
  • Dourado, R. A., Rodrigues, R. L., Ferreira, N., Mello, R. F., Gomes, A. S., & Verbert, K. (2021). A teacher-facing learning analytics dashboard for process-oriented feedback in online learning. In Proceedings of the 11th International Conference on Learning Analytics & Knowledge (LAK ’21) (pp. 1–8). Association for Computing Machinery. https://doi.org/10.1145/3448139.3448187
  • Embarak, O. (2025). A behaviour-driven framework for smart education: Leveraging explainable AI and IoB in personalized learning systems. Procedia Computer Science, 265, 457–466. https://doi.org/10.1016/j.procs.2025.07.205
  • European Commission. (2019). Ethics guidelines for trustworthy AI. High-Level Expert Group on Artificial Intelligence. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
  • Facione, P. A. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction. American Philosophical Association.
  • Feldman-Maggor, Y., Cukurova, M., Kent, C., & Alexandron, G. (2025). The impact of explainable AI on teachers’ trust and acceptance of AI EdTech recommendations: The power of domain-specific explanations. International Journal of Artificial Intelligence in Education. Advance online publication. https://doi.org/10.1007/s40593-025-00486-6
  • Fütterer, T., Goldberg, P., Bühler, B., Sikimić, V., Trautwein, U., Gerjets, P., Stürmer, K., & Kasneci, E. (2025). Artificial intelligence in classroom management: A systematic review on educational purposes, technical implementations, and ethical considerations. Computers and Education: Artificial Intelligence, 9, 100483. https://doi.org/10.1016/j.caeai.2025.100483
  • GeeksforGeeks. (2021, October 29). What is a saliency map? Retrieved July 10, 2025, from https://www.geeksforgeeks.org/machine-learning/what-is-saliency-map/
  • Gradescope Guides. (2024, March 9). AI-assisted grading and answer groups. Gradescope Help Center. Retrieved July 10, 2025, from https://guides.gradescope.com/hc/en-us/articles/24838908062093-AI-Assisted-Grading-and-Answer-Groups
  • Ho, C. S. M., & Lee, J. C. K. (2025). From intuition to action: Exploring teachers’ ethical awareness in the use of AI tools in education. Computers and Education: Artificial Intelligence. Advance online publication. https://doi.org/10.1016/j.caeai.2025.100502
  • Holstein, K., McLaren, B. M., & Aleven, V. (2020). Designing for complementarity: Teacher and student needs for orchestration support in AI-enhanced classrooms. Proceedings of the 2020 ACM Conference on Learning at Scale, 43–55. https://files.eric.ed.gov/fulltext/ED594602.pdf
  • Holstein, K., Wortman Vaughan, J., Daumé, H., Dudik, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3290605.3300830
  • Hur, P., Lee, H., Bhat, S., & Bosch, N. (2022, July). Using machine learning explainability methods to personalize interventions for students. In A. Mitrovic & N. Bosch (Eds.), Proceedings of the 15th International Conference on Educational Data Mining (pp. 438–445). International Educational Data Mining Society. https://doi.org/10.5281/zenodo.6853181
  • IBM. (2023, March 29). What is explainable AI (XAI)? IBM. https://www.ibm.com/think/topics/explainable-ai
  • iFLYTEK. (2025). TalkPal – AI Language Tutor. Retrieved July 10, 2025, from https://talkpal.ai/
  • Islam, M. M., Sojib, F. H., Mihad, M. F. H., Hasan, M., & Rahman, M. (2025). The integration of explainable AI in educational data mining for student academic performance prediction and support system. Telematics and Informatics Reports, 18, 100203. https://doi.org/10.1016/j.teler.2025.100203
  • Jin, F.-Y., Maheshi, B., Martinez-Maldonado, R., Gašević, D., & Tsai, Y.-S. (2024). Scaffolding feedback literacy: Designing a feedback analytics tool with students. Journal of Learning Analytics, 11(2), 123–137. https://doi.org/10.18608/jla.2024.8339
  • Johora, F. T., Hasan, M. N., Rajbongshi, A., Ashrafuzzaman, M., & Akter, F. (2025). An explainable AI-based approach for predicting undergraduate students’ academic performance. Array, 26, 100384. https://doi.org/10.1016/j.array.2025.100384
  • Karayev, S., & Gutowski, K. (2020). Design principles of AI-assisted grading: Expert insights. Turnitin Tech Talk. Retrieved July 03, 2025, from https://www.turnitin.com/blog/design-principles-of-ai-assisted-grading
  • Ke, Z., & Ng, V. (2021). Automated essay scoring: A survey of the state of the art. Proceedings of the 30th International Joint Conference on Artificial Intelligence (IJCAI-21), 4927–4935. https://www.ijcai.org/proceedings/2019/0879.pdf
  • Kholis, A. (2021). Elsa speak app: Automatic speech recognition (ASR) for supplementing English pronunciation skills. Pedagogy: Journal of English Language Teaching, 9(1), 1–14. https://doi.org/10.32332/joelt.v9i1.2723
  • Khosravi, H., Buckingham Shum, S., Chen, G., Conati, C., Tsai, Y.-S., Kay, J., Knight, S., Martinez-Maldonado, R., Sadiq, S., & Gašević, D. (2022). Explainable artificial intelligence in education. Computers and Education: Artificial Intelligence, 3, Article 100074. https://doi.org/10.1016/j.caeai.2022.100074
  • Kong, S. C., & Zhu, J. (2025). Developing and validating an artificial intelligence ethical awareness scale for secondary and university students: Cultivating ethical awareness through problem-solving with artificial intelligence tools. Computers and Education: Artificial Intelligence, 9, 100447. https://doi.org/10.1016/j.caeai.2025.100447
  • KoreaTechDesk. (2019, September 6). AI in education: Korean startup Riiid’s creative disruption with Santa TOEIC learning tool. KoreaTechDesk. Retrieved June 23, 2025, from https://www.koreatechdesk.com/ai-in-education-korean-startup-riiids-creative-disruption-with-santa-toiec-learning-tool/
  • Lundberg, S. (2018). Welcome to the SHAP documentation. SHAP (SHapley Additive exPlanations). Retrieved July 10, 2025, from https://shap.readthedocs.io/en/latest/
  • Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30. https://papers.nips.cc/paper_files/paper/2017/file/8a20a8621978632d76c43dfd28b67767-Paper.pdf
  • Madanchian, M., & Taherdoost, H. (2025). Decision-making criteria for AI tools in digital education. Digital Engineering, 7, 100069. https://doi.org/10.1016/j.dte.2025.100069
  • Mark My Words. (n.d.). AI-powered writing feedback for students. Retrieved May 20, 2025, from https://markmywords.au/
  • Masiello, I., Mohseni, Z., Palma, F., Nordmark, S., Augustsson, H., & Rundquist, R. (2024). A current overview of the use of learning analytics dashboards. Education Sciences, 14(1), 82. https://doi.org/10.3390/educsci14010082
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
  • Nagy, M., & Molontay, R. (2024). Interpretable dropout prediction: Towards XAI-based personalized intervention. International Journal of Artificial Intelligence in Education, 34(1), 274–300. https://doi.org/10.1007/s40593-023-00331-8
  • Neves, J., Freeman, J., Stephenson, R., & Sotiropoulou, P. (2024). Student Academic Experience Survey 2024. Higher Education Policy Institute & Advance HE. https://www.hepi.ac.uk/wp-content/uploads/2024/06/SAES-2024.pdf
  • Nguyen, A., Hong, Y., Dang, B., & Huang, X. (2024). Human-AI collaboration patterns in AI-assisted academic writing. Studies in Higher Education, 49(5), 847–864. https://doi.org/10.1080/03075079.2024.2323593
  • Papers with Code. (n.d.). Local Interpretable Model Agnostic Explanations (LIME). Retrieved July 10, 2025, from https://paperswithcode.com/method/lime
  • Patil, D., & Hurix Digital. (2024). Explainable artificial intelligence (XAI): Enhancing transparency and trust in machine learning models. Hurix Digital White Paper. https://www.researchgate.net/publication/385629166
  • Plymouth University. (2022). Introducing Turnitin Draft Coach [Blog post]. Digital Education Team. Retrieved July 10, 2025, from https://blogs.plymouth.ac.uk/digital-education/introducing-turnitin-draft-coach/
  • PowerSchool. (2025). Naviance college, career, and life readiness (CCLR) [Product overview]. PowerSchool. Retrieved July 10, 2025, from https://www.powerschool.com/solutions/college-career-and-life-readiness/naviance-cclr/
  • PR Newswire. (2022, December 16). iFLYTEK AI Education Solutions reach 3,600 schools [Press release]. Retrieved April 02, 2025, from https://www.prnewswire.com/news-releases/iflytek-ai-education-solutions-reach-3-600-schools-301705361.html
  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why should I trust you?” Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144. http://dx.doi.org/10.1145/2939672.2939778
  • Riiid Inc. (2024). Riiid for Classroom: AI-powered learning insights [Company overview]. Retrieved May 25, 2025, from https://www.riiid.com/solution
  • Rudin, C. (2019). Stop explaining black box machine learning models for high-stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
  • Salido, A., Syarif, I., Sitepu, M. S., Suparjan, S., & Melisa, R. (2025). Integrating critical thinking and artificial intelligence in higher education: A bibliometric and systematic review of skills and strategies. Social Sciences & Humanities Open, 12, 101924. https://doi.org/10.1016/j.ssaho.2025.101924
  • Salvi, M., Seoni, S., Campagner, A., Gertych, A., & Acharya, U. R. (2025). Explainability and uncertainty: Two sides of the same coin for enhancing the interpretability of deep learning models in healthcare. International Journal of Medical Informatics (in press). https://doi.org/10.1016/j.ijmedinf.2025.105846
  • Samek, W., Wiegand, T., & Müller, K.-R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. IEEE Signal Processing Magazine, 35(4), 82–104.
  • Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Polity Press. https://research.monash.edu/en/publications/should-robots-replace-teachers-ai-and-the-future-of-education
  • Shafik, W. (2024). Towards trustworthy and explainable AI educational systems. In Explainable AI for Education: Recent Trends and Challenges (pp. 17–41). Springer, Cham. https://doi.org/10.1007/978-3-031-72410-7_2
  • Shermis, M. D., & Burstein, J. (Eds.). (2013). Handbook of automated essay evaluation: Current applications and new directions. Routledge.
  • Sholekhah, M. F., & Fakhrurriana, R. (2023). The use of ELSA Speak as a mobile assisted language learning (MALL) tool toward EFL students’ pronunciation. JELITA: Journal of Education, Language Innovation and Applied Linguistics, 2(2). https://doi.org/10.37058/jelita.v2i2.7596
  • Stoian, V.-G. (2023). Students’ trust in automated grading through explainable AI visualizations [Bachelor’s thesis, University of Twente]. University of Twente Institutional Repository. https://essay.utwente.nl/95853/1/Stoian_BA_EEMCS.pdf
  • Subhash, B. (2022, March 6). Explainable AI: Saliency maps. Medium. Retrieved July 10, 2025, from https://medium.com/@bijil.subhash/explainable-ai-saliency-maps-89098e230100
  • Sweller, J., Ayres, P., & Kalyuga, S. (2011). Cognitive load theory. Springer. https://link.springer.com/book/10.1007/978-1-4419-8126-4
  • Tiwari, A. (2025). Bridging the career guidance gap in India: A tech-driven approach with CareerNeeti. International Journal of Scientific Research in Engineering and Management, 9(05), 1–9. https://doi.org/10.55041/IJSREM46727
  • Turnitin Draft Coach. (2025). Draft Coach product features. Turnitin. Retrieved July 06, 2025, from https://www.turnitin.ph/products/features/draft-coach/
  • Turnitin. (2023). Immediate formative feedback with Draft Coach develops writing and research skills [Case study]. Retrieved June 02, 2025, from https://www.turnitin.co.uk/case-studies/immediate-formative-feedback-with-draft-coach-develops-writing-and-research-skills
  • UNESCO. (2021). Recommendation on the ethics of artificial intelligence. https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence
  • Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
  • Winne, P. H., & Hadwin, A. F. (1998). Studying as self-regulated learning. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 277–304). Lawrence Erlbaum Associates. https://www.researchgate.net/publication/247664651
  • Xinhui Technology. (2025). SpeakBuddy: English Speaking. App Store. Retrieved July 10, 2025.
  • Yudelson, M. V., Koedinger, K. R., & Gordon, G. J. (2013). Individualized Bayesian Knowledge Tracing models. In Artificial Intelligence in Education (pp. 171–180). Springer. https://doi.org/10.1007/978-3-642-39112-5_18
  • Zhang, H., Lee, I., Ali, S., DiPaola, D., & Breazeal, C. (2023). Integrating ethics and career futures with technical learning to promote AI literacy for middle school students: An exploratory study. International Journal of Artificial Intelligence in Education, 33, 290–324. https://doi.org/10.1007/s40593-022-00293-3
  • Zhou, D., Bischl, B., & Molnar, C. (2021). Designing explainable AI interfaces for education: A review of user needs and technical capabilities. Journal of Learning Analytics, 8(1), 43–59.
  • Zhu, Y., & Wan Hussain, W. M. H. W. (2025). Artificial intelligence (AI) awareness (2019–2025): A systematic literature review using the SPAR-4-SLR protocol. Social Sciences & Humanities Open, 12, 101870. https://doi.org/10.1016/j.ssaho.2025.101870
  • Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory into Practice, 41(2), 64–70. https://doi.org/10.1207/s15430421tip4102_2
A total of 74 references are listed.

Details

Primary Language: English
Subjects: Educational Technology and Computing
Section: Review
Authors

Hasan Tınmaz (ORCID: 0000-0003-4310-0848)

Submission Date: July 10, 2025
Acceptance Date: November 25, 2025
Publication Date: December 22, 2025
Published in Issue: Year 2025, Volume: 21, Issue: 3

How to Cite

APA: Tınmaz, H. (2025). From Transparency to Trust: A Literature Review on Explainable AI in Educational Systems. Mersin Üniversitesi Eğitim Fakültesi Dergisi, 21(3), 999–1018. https://doi.org/10.17860/mersinefd.1739347

Once articles are published in the journal, the publication rights belong to the journal.
All articles published in the journal are licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, which allows them to be shared by others.