Determining the Educational Neuroscience Literacy of Generative AI
Abstract
This study examines the ability of generative artificial intelligence (GenAI) models to provide accurate information about the structure and functions of the brain, as well as their performance when confronted with misinformation and misconceptions (neuromyths). To this end, 41 items from an Educational Neuroscience questionnaire were presented to five GenAI models (ChatGPT, Claude, Copilot, DeepSeek, and Gemini) in both English and Turkish. Each model was asked to answer each item with one of three options: “True”, “False”, or “I don’t know”. The questionnaire was administered twice. The findings showed that the GenAI models performed well in providing correct information about the brain's structure and functions, but only moderately on items involving neuromyths. In both areas, the language of the questionnaire significantly affected response accuracy. A closer look at the models' response patterns revealed a tendency to endorse incorrect information and to accept popular beliefs as accurate. This suggests that GenAI models may help counter neuromyths while also potentially contributing to their dissemination. Consequently, the study emphasizes the need to critically evaluate GenAI-generated responses, particularly those concerning neuromyths.
Keywords
Ethics Statement
Details
Primary Language
English
Subjects
Instructional Technologies, Educational Technology and Computing, Studies on Education (Other)
Section
Research Article
Authors
Şenol Saygıner
0000-0002-5280-3847
Türkiye
Publication Date
April 28, 2026
Submission Date
September 4, 2025
Acceptance Date
March 2, 2026
Published in Issue
Year 2026 Volume: 28 Issue: 2026