Research Article

Is ChatGPT's Knowledge on Rhinology Accurate? Can It Be Utilized in Medical Education and Patient Information?

Year 2026, Volume: 9 Issue: 1, 21 - 26, 26.03.2026
https://doi.org/10.65396/ejra.1859529
https://izlik.org/JA82UR45SA

Abstract


Objective:
ChatGPT is an artificial intelligence model designed to generate human-like conversation. With advancing knowledge and ongoing technological progress, it shows promise in medicine, particularly as a resource for patients and clinicians. The aim of our study was to measure the accuracy and consistency of ChatGPT's answers to questions in the field of rhinology.
Methods:
In March 2024, 130 rhinology questions were presented to ChatGPT (version 4). Each question was asked twice, and the consistency and reproducibility of the answers were investigated. The answers were evaluated by three ENT physicians.
Results:
The answers given by ChatGPT were consistent at a rate of 91.5% (119/130). Among the 11 inconsistent pairs, the second answer was the more correct one in 10 of 11 cases; statistically, the second answer was more accurate (p = 0.011). Across the 130 questions, the three evaluators rated 99, 81, and 80 answers as completely correct (76.2%, 62.3%, and 61.5%, respectively), while completely incorrect answers accounted for 5.4%, 4.6%, and 5.4%, respectively. There was no statistically significant difference between the evaluators (p = 0.270).
Conclusion:
The error rate of ChatGPT in patient information and education processes can be considered acceptable, and its answers largely reliable. However, ChatGPT's answers are not always completely correct, and it can give misleading answers to some questions. We believe it would be safer and more accurate to use ChatGPT as an informative and educational tool for patients under the supervision of experts.
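The rates reported in the Results follow from simple proportions over the 130 questions. A minimal sketch that reproduces them (the counts 119, 99, 81, and 80 are taken from the abstract; the helper `pct` is illustrative, not part of the study's analysis code):

```python
# Reproduce the percentages reported in the Results section.
TOTAL = 130  # total number of rhinology questions

def pct(count, total=TOTAL):
    """Percentage of `count` out of `total`, rounded to one decimal place."""
    return round(100 * count / total, 1)

consistency = pct(119)                           # consistent answer pairs
fully_correct = [pct(n) for n in (99, 81, 80)]   # completely correct, per evaluator

print(consistency)    # 91.5
print(fully_correct)  # [76.2, 62.3, 61.5]
```

These match the 91.5% consistency rate and the 76.2%, 62.3%, and 61.5% complete-correctness rates reported for the three evaluators.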

References

  • 1. Biswas SS. Role of ChatGPT in Public Health. Ann Biomed Eng 2023; 51:868–869
  • 2. Munoz-Zuluaga C, Zhao Z, Wang F, et al. Assessing the accuracy and clinical utility of ChatGPT in laboratory medicine. Clin Chem 2023; 69:939–40.
  • 3. The Lancet Digital Health. ChatGPT: friend or foe? Lancet Digit Health 2023;5:e102. doi:10.1016/S2589-7500(23)00023-7
  • 4. van Dis EAM, Bollen J, Zuidema W, et al. ChatGPT: five priorities for research. Nature 2023; 614:224–6.
  • 5. OpenAI. ChatGPT. 2023. https://openai.com/blog/chatgpt (Accessed March 28, 2023).
  • 6. Exploding Topics. Number of ChatGPT users 2023. https://explodingtopics.com/blog/chatgpt-users (Accessed March 30, 2023).
  • 7. Else H. Abstracts written by ChatGPT fool scientists. Nature 2023; 613:423.
  • 8. Haupt CE, Marks M. AI-generated medical advice-GPT and beyond. JAMA 2023; 329:1349–50.
  • 9. Jeblick K, Schachtner B, Dexl J, et al. ChatGPT makes medicine easy to swallow: an exploratory case study on simplified radiology reports. Eur Radiol 2024;34:2817-2825.
  • 10. Allahqoli L, Ghiasvand MM, Mazidimoradi A, et al. The diagnostic and management performance of the chatGPT in obstetrics and gynecology. Gynecol Obstet Invest 2023; 88:310–3.
  • 11. Samaan JS, Yeo YH, Rajeev N, et al. Assessing the accuracy of responses by the language model chatGPT to questions regarding bariatric surgery. Obes Surg 2023;33:1790–6.
  • 12. Yeo YH, Samaan JS, Ng WH, et al. Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma. Clin Mol Hepatol. 2023;29:721-732.
  • 13. Dave T, Athaluri SA, Singh S. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell. 2023;6:1169595.
  • 14. De Angelis L, Baglivo F, Arzilli G, et al. ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Front Public Health 2023;11:1166120
  • 15. Kuscu O, Pamuk AE, Sütay Süslü N, et al. Is ChatGPT accurate and reliable in answering questions regarding head and neck cancer? Front. Oncol. 2023;13:1256459.
  • 16. Park I, Joshi AS, Javan R. Potential role of ChatGPT in clinical otolaryngology explained by ChatGPT. Am J Otolaryngol 2023; 44:103873.
  • 17. Ayoub NF, Lee YJ, Grimm D, et al. Comparison between chatGPT and google search as sources of postoperative patient instructions. JAMA Otolaryngol Head Neck Surg 2023; 149:556–8.
  • 18. Chiesa-Estomba CM, Lechien JR, Vaira LA, et al. Exploring the potential of Chat-GPT as a supportive tool for sialendoscopy clinical decision making and patient information support. Eur Arch Otorhinolaryngol 2024;281:2777
  • 19. Hoch CC, Wollenberg B, Luers JC, et al. ChatGPT's quiz skills in different otolaryngology subspecialties: an analysis of 2576 single-choice and multiple-choice board certification preparation questions. Eur Arch Otorhinolaryngol 2023;280:4271–8.
  • 20. Riestra-Ayora J, Vaduva C, Esteban-Sánchez J, et al. ChatGPT as an information tool in rhinology. Can we trust each other today? Eur Arch Otorhinolaryngol. 2024;281:3253-3259.
  • 21. Maniaci A, Saibene AM, Calvo-Henriquez C, et al. Is generative pre-trained transformer artificial intelligence (Chat-GPT) a reliable tool for guidelines synthesis? A preliminary evaluation for biologic CRSwNP therapy. Eur Arch Otorhinolaryngol. 2024;281:2167-2173.
  • 22. Patel EA, Fleischer L, Filip P, et al. Comparative Performance of ChatGPT 3.5 and GPT4 on Rhinology Standardized Board Examination Questions. OTO Open. 2024;8:e164.
  • 23. Radulesco T, Saibene AM, Michel J, et al. ChatGPT-4 performance in rhinology: A clinical case series. Int Forum Allergy Rhinol. 2024;14:1123-1130.
  • 24. Workman AD, Rathi VK, Lerner DK, et al. Utility of a LangChain and OpenAI GPT-powered chatbot based on the international consensus statement on allergy and rhinology: Rhinosinusitis. Int Forum Allergy Rhinol. 2024;14:1101-1109.
  • 25. Ye F, Zhang H, Luo X, et al. Evaluating ChatGPT's Performance in Answering Questions About Allergic Rhinitis and Chronic Rhinosinusitis. Otolaryngol Head Neck Surg. 2024;171:571-577.
  • 26. Bellinger JR, Kwak MW, Ramos GA, et al. Quantitative Comparison of Chatbots on Common Rhinology Pathologies. Laryngoscope. 2024;134(10):4225-4231.
There are 26 citations in total.

Details

Primary Language English
Subjects Otorhinolaryngology
Journal Section Research Article
Authors

Aykut Özdoğan 0009-0001-6460-3383

Burçay Tellioğlu 0000-0003-2473-7085

Oğuzhan Katar 0000-0001-5485-7948

Serdar Özer

Submission Date January 12, 2026
Acceptance Date February 5, 2026
Publication Date March 26, 2026
DOI https://doi.org/10.65396/ejra.1859529
IZ https://izlik.org/JA82UR45SA
Published in Issue Year 2026 Volume: 9 Issue: 1

Cite

APA Özdoğan, A., Tellioğlu, B., Katar, O., & Özer, S. (2026). Is ChatGPT’s Knowledge on Rhinology Accurate? Can It Be Utilized in Medical Education and Patient Information? European Journal of Rhinology and Allergy, 9(1), 21-26. https://doi.org/10.65396/ejra.1859529
AMA 1.Özdoğan A, Tellioğlu B, Katar O, Özer S. Is ChatGPT’s Knowledge on Rhinology Accurate? Can It Be Utilized in Medical Education and Patient Information? Eur J Rhinol Allergy. 2026;9(1):21-26. doi:10.65396/ejra.1859529
Chicago Özdoğan, Aykut, Burçay Tellioğlu, Oğuzhan Katar, and Serdar Özer. 2026. “Is ChatGPT’s Knowledge on Rhinology Accurate? Can It Be Utilized in Medical Education and Patient Information?”. European Journal of Rhinology and Allergy 9 (1): 21-26. https://doi.org/10.65396/ejra.1859529.
EndNote Özdoğan A, Tellioğlu B, Katar O, Özer S (March 1, 2026) Is ChatGPT’s Knowledge on Rhinology Accurate? Can It Be Utilized in Medical Education and Patient Information? European Journal of Rhinology and Allergy 9 1 21–26.
IEEE [1]A. Özdoğan, B. Tellioğlu, O. Katar, and S. Özer, “Is ChatGPT’s Knowledge on Rhinology Accurate? Can It Be Utilized in Medical Education and Patient Information?”, Eur J Rhinol Allergy, vol. 9, no. 1, pp. 21–26, Mar. 2026, doi: 10.65396/ejra.1859529.
ISNAD Özdoğan, Aykut - Tellioğlu, Burçay - Katar, Oğuzhan - Özer, Serdar. “Is ChatGPT’s Knowledge on Rhinology Accurate? Can It Be Utilized in Medical Education and Patient Information?”. European Journal of Rhinology and Allergy 9/1 (March 1, 2026): 21-26. https://doi.org/10.65396/ejra.1859529.
JAMA 1.Özdoğan A, Tellioğlu B, Katar O, Özer S. Is ChatGPT’s Knowledge on Rhinology Accurate? Can It Be Utilized in Medical Education and Patient Information? Eur J Rhinol Allergy. 2026;9:21–26.
MLA Özdoğan, Aykut, et al. “Is ChatGPT’s Knowledge on Rhinology Accurate? Can It Be Utilized in Medical Education and Patient Information?”. European Journal of Rhinology and Allergy, vol. 9, no. 1, Mar. 2026, pp. 21-26, doi:10.65396/ejra.1859529.
Vancouver 1.Aykut Özdoğan, Burçay Tellioğlu, Oğuzhan Katar, Serdar Özer. Is ChatGPT’s Knowledge on Rhinology Accurate? Can It Be Utilized in Medical Education and Patient Information? Eur J Rhinol Allergy. 2026 Mar. 1;9(1):21-6. doi:10.65396/ejra.1859529

You can find the current version of the Instructions to Authors at: https://www.eurjrhinol.org/en/instructions-to-authors-104

Starting in 2020, all content published in the journal is licensed under the Creative Commons Attribution-NonCommercial (CC BY-NC) 4.0 International
License which allows third parties to use the content for non-commercial purposes as long as they give credit to the original work. This license
allows for the content to be shared and adapted for non-commercial purposes, promoting the dissemination and use of the research published in
the journal.
The content published before 2020 was licensed under a traditional copyright, but the archive is still available for free access.