Research Article

Comparison of the Success of ChatGPT 4.0 and Google Gemini in Anatomy Questions Asked in Türkiye National Medical Specialization Exams

Volume: 24, Issue: 74, 22 December 2025

Abstract

Objective: The scientific validity of utilizing artificial intelligence (AI)-based tools for studying anatomy and preparing for medical specialization exams has increasingly become a subject of academic interest. This study aimed to evaluate the performance of ChatGPT 4.0 and Google Gemini in answering anatomy questions from the Türkiye National Medical Specialization Examination. Materials and Methods: Anatomy-related questions were extracted from exams administered biannually between 2006 and 2021, which were publicly available through the institutional website. Out of 400 questions, 384 were deemed suitable and were simultaneously posed to both AI models. Results: The overall accuracy was 80.7% for ChatGPT 4.0 and 69.3% for Gemini (p < 0.001). ChatGPT 4.0 demonstrated a significantly higher success rate in questions requiring clinical reasoning and inference (91.1%) compared to Gemini (71.4%) (p = 0.007). Conclusion: ChatGPT 4.0 outperformed Gemini in terms of accuracy and reliability, particularly for clinically oriented anatomy questions. While AI models such as ChatGPT show promise in anatomy education and exam preparation, it is advisable to use them in conjunction with validated academic resources.
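The abstract does not state which statistical test produced the reported p-values, so the sketch below is only an illustrative sanity check of the headline result: it reconstructs the correct-answer counts from the reported percentages (80.7% and 69.3% of 384 questions) and applies a two-proportion z-test, treating the two models' scores as independent samples even though the same questions were posed to both.

```python
from math import erf, sqrt

# Correct-answer counts reconstructed from the reported percentages;
# these are assumptions derived from the abstract, not raw study data.
n = 384
chatgpt_correct = round(0.807 * n)  # ~310 correct
gemini_correct = round(0.693 * n)   # ~266 correct

p1, p2 = chatgpt_correct / n, gemini_correct / n
pooled = (chatgpt_correct + gemini_correct) / (2 * n)

# Standard error under the pooled null hypothesis of equal accuracy
se = sqrt(pooled * (1 - pooled) * (2 / n))
z = (p1 - p2) / se

# Two-sided p-value from the normal approximation
p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

print(f"z = {z:.2f}, p = {p_value:.4f}")
```

The resulting p-value falls below 0.001, consistent with the abstract's claim; since each question was answered by both models, a paired test such as McNemar's would be the more rigorous choice if the per-question outcomes were available.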



Details

Primary Language: English
Subjects: Medical Education
Section: Research Article
Publication Date: 22 December 2025
Submission Date: 10 June 2025
Acceptance Date: 3 November 2025
Published Issue: Year 2025, Volume: 24, Issue: 74

Cite

APA
Keskin, A., & Aygün, T. (2025). Comparison of the Success of Chatgpt 4.0 and Google Gemini in Anatomy Questions Asked in Türkiye National Medical Specialization Exams. Tıp Eğitimi Dünyası, 24(74), 127-134. https://doi.org/10.25282/ted.1716591