Research Article

The Effect of Language on Artificial Intelligence Performance in Answering Pedodontics Questions: Comparison of Turkish and English with ChatGPT-4.0 and DeepSeek-R1

Year 2026, Volume: 28, Issue: 1, 49-54, 27.04.2026
https://doi.org/10.24938/kutfd.1784362
https://izlik.org/JA95XW57LZ

Abstract

Objective: This study aimed to evaluate the performance of two chatbots, ChatGPT-4.0 and DeepSeek-R1, on multiple-choice questions in pedodontics, and the effect of the question language on that performance.
Material and Methods: Twenty multiple-choice questions on pedodontics topics were prepared for the two AI chatbots to answer. The Turkish and English versions of these questions were submitted to ChatGPT-4.0 and DeepSeek-R1, and each answer was recorded as correct or incorrect. Analyses were performed with IBM Statistical Package for Social Sciences (SPSS) for Windows, version 27 (SPSS Inc., Chicago, IL, USA). The significance level was set at p<0.05.
Results: ChatGPT-4.0's accuracy was 89% in both languages, whereas DeepSeek-R1 achieved 90% in Turkish and 92.5% in English. No statistically significant difference in accuracy was found between models or between languages.
Conclusion: Both models achieved similar accuracy in Turkish and English, with DeepSeek-R1 performing slightly better in English. Given their ease of use and strong performance on text-based multiple-choice questions, large language models such as ChatGPT-4.0 and DeepSeek-R1 can be regarded as usable tools to support learning in dental education.
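The abstract reports that the accuracy comparison was tested at p<0.05 but does not name the test. For correct/incorrect counts from a small fixed question set, such a comparison is typically run as a Fisher's exact test on a 2x2 table. The sketch below is a minimal, stdlib-only illustration of that idea; the counts used (18/20 vs. 19/20 correct) are hypothetical, since the article does not report raw answer counts.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]].

    Rows are the two conditions (e.g. Turkish vs. English prompts);
    columns are counts of correct and incorrect answers.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def prob(k):
        # Hypergeometric probability that row 1 contains k correct answers.
        return comb(row1, k) * comb(n - row1, col1 - k) / denom

    p_obs = prob(a)
    lo_k = max(0, col1 - (n - row1))
    hi_k = min(row1, col1)
    # Two-sided p: sum the probabilities of all tables at least as
    # extreme as (i.e. no more probable than) the observed one.
    return sum(prob(k) for k in range(lo_k, hi_k + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# Hypothetical counts: 18/20 correct in Turkish vs. 19/20 correct in English.
p = fisher_exact_2x2(18, 2, 19, 1)
print(f"p = {p:.3f}")  # well above 0.05: no significant difference
```

With samples this small, even a one-question gap between languages yields a p-value far above 0.05, which is consistent with the abstract's finding of no statistically significant difference.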

Ethical Statement

Since the study did not involve human or animal subjects, ethics committee approval was not required.

Supporting Institution

No financial support was received from any institution or individual.

Acknowledgments

None



Details

Primary Language: Turkish
Subjects: Health Services and Systems (Other)
Section: Research Article
Authors

Esra Hato 0000-0002-9105-8448

Koray Peker 0009-0008-0077-5670

Submission Date: September 15, 2025
Acceptance Date: January 23, 2026
Publication Date: April 27, 2026
DOI https://doi.org/10.24938/kutfd.1784362
IZ https://izlik.org/JA95XW57LZ
Published Issue: Year 2026, Volume: 28, Issue: 1

How to Cite

APA Hato, E., & Peker, K. (2026). Pedodonti sorularının yanıtlanmasında yapay zekâ performansına dilin etkisi: ChatGPT-4.0 ve DeepSeek-R1 ile Türkçe ve İngilizce karşılaştırması. The Journal of Kırıkkale University Faculty of Medicine, 28(1), 49-54. https://doi.org/10.24938/kutfd.1784362

This journal is a publication of the Kırıkkale University Faculty of Medicine.