Research Article

Determining the Educational Neuroscience Literacy of Generative AI

Volume: 28, Issue: 2026, 28 April 2026

Abstract

This study examines the ability of generative artificial intelligence (GenAI) models to provide accurate information about the structure and functions of the brain, as well as their performance when confronted with misinformation and misconceptions (neuromyths). To this end, 41 items from an Educational Neuroscience questionnaire were presented to five GenAI models—ChatGPT, Claude, Copilot, DeepSeek, and Gemini—in both English and Turkish. Each model was asked to answer each item with one of three options: “True”, “False”, or “I don’t know”. The questionnaire was administered twice. The findings showed that the GenAI models performed well in providing correct information about the brain's structure and functions; on items involving neuromyths, however, their performance was only moderate. In both areas, the language of the questionnaire was a significant factor affecting response accuracy. A closer look at the models' response patterns revealed a tendency to trust incorrect information and to accept popular beliefs as accurate. This suggests that GenAI models may help counter neuromyths while also potentially contributing to their dissemination. The study therefore emphasizes the need to critically evaluate GenAI-generated responses, particularly those concerning neuromyths.
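The scoring protocol described above—each of the 41 items answered "True", "False", or "I don't know", with accuracy tallied separately for factual items and neuromyth items—can be sketched as follows. This is a minimal illustration, not the study's actual code: the item keys, category labels, sample answers, and the rule that "I don't know" counts as incorrect are all assumptions made for the example.

```python
def score_responses(responses, answer_key, item_type):
    """Return accuracy per item category (e.g. 'fact' vs 'myth').

    "I don't know" counts as incorrect here — an assumed strict
    scoring rule; the study's exact rule may differ.
    """
    totals, correct = {}, {}
    for item, given in responses.items():
        cat = item_type[item]                      # category of this item
        totals[cat] = totals.get(cat, 0) + 1       # items seen per category
        if given == answer_key[item]:              # exact match with the key
            correct[cat] = correct.get(cat, 0) + 1
    return {cat: correct.get(cat, 0) / n for cat, n in totals.items()}

# Toy example with three invented items:
key = {"q1": "True", "q2": "False", "q3": "False"}
types = {"q1": "fact", "q2": "myth", "q3": "myth"}
model_answers = {"q1": "True", "q2": "True", "q3": "I don't know"}
print(score_responses(model_answers, key, types))
# → {'fact': 1.0, 'myth': 0.0}
```

In this toy run the model answers the factual item correctly but endorses one neuromyth and abstains on another, so factual accuracy is perfect while neuromyth accuracy is zero—mirroring the asymmetry the abstract reports at a smaller scale.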

Keywords

Ethical Statement

This study does not require ethics committee approval. Throughout the entire research process, the rules set by the Committee on Publication Ethics (COPE) were followed.


Details

Primary Language

English

Subjects

Instructional Technologies, Educational Technology and Computing, Studies on Education (Other)

Section

Research Article

Publication Date

28 April 2026

Submission Date

4 September 2025

Acceptance Date

2 March 2026

Published Issue

Year 2026, Volume: 28, Issue: 2026

How to Cite

APA
Saygıner, Ş. (2026). Determining the Educational Neuroscience Literacy of Generative AI. Erzincan Üniversitesi Eğitim Fakültesi Dergisi, 28(2026). https://doi.org/10.17556/erziefd.1777914