Is ChatGPT a Sufficient and Readable Help Tool for the Most Frequently Asked Questions in General Dentistry?
Year 2025, Volume: 52, Issue: 2, Pages: 97–102, 31.08.2025
Gözde Serindere, Ceren Aktuna Belgin, Kaan Gündüz
Abstract
Purpose: Artificial intelligence (AI)-enabled systems such as ChatGPT offer benefits in many areas of dentistry, including patient education, counseling, appointment management, and professional development. Used correctly and effectively, such technologies can improve the experience of both patients and dentists. The aim of this study was to determine the accuracy and readability of ChatGPT's responses to common patient questions about general dentistry.
Materials and Methods: The questions most frequently asked by patients were collected from web-based tools. The accuracy and relevance of ChatGPT's responses were assessed subjectively by two observers using a 5-point Likert scale, and objectively by comparing the responses with the Clinical Practice Guidelines and Dental Evidence published by the American Dental Association (ADA) and with the literature. Readability was assessed using the Simple Measure of Gobbledygook (SMOG), the Flesch-Kincaid Grade Level (FKGL), and the Flesch Reading Ease Score (FRES).
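For readers who want to reproduce this kind of readability scoring, a minimal Python sketch of the three standard formulas is given below. The function names and the vowel-group syllable counter are illustrative assumptions, not the study's actual tooling; published calculators use dictionary-based syllable counts, so exact scores may differ slightly.

```python
import math
import re


def _count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels.

    A heuristic only; dictionary-based counters are more accurate.
    """
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    # A trailing silent 'e' usually does not add a syllable.
    if word.endswith("e") and not word.endswith("le") and count > 1:
        count -= 1
    return max(count, 1)


def readability_scores(text: str) -> dict:
    """Compute FRES, FKGL, and SMOG using the standard published formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = [_count_syllables(w) for w in words]

    n_sent = max(len(sentences), 1)
    n_words = max(len(words), 1)
    n_syll = sum(syllables)
    polysyllables = sum(1 for s in syllables if s >= 3)  # words with 3+ syllables

    fres = 206.835 - 1.015 * (n_words / n_sent) - 84.6 * (n_syll / n_words)
    fkgl = 0.39 * (n_words / n_sent) + 11.8 * (n_syll / n_words) - 15.59
    smog = 1.0430 * math.sqrt(polysyllables * (30 / n_sent)) + 3.1291

    return {"FRES": round(fres, 2), "FKGL": round(fkgl, 2), "SMOG": round(smog, 2)}
```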
Results: The readability of ChatGPT's responses was above the level recommended for the average patient (SMOG: 17.91; FRES: 43.98; FKGL: 10.29). The mean Likert score was 4.55, indicating that most responses were correct apart from minor inaccuracies or missing information. The FKGL and FRES scores correspond to a difficult reading level for patients seeking answers to general dental questions.
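To place the reported FRES of 43.98 in context, the sketch below maps scores onto Flesch's conventional difficulty bands (the bands follow the widely cited Flesch scale; the function itself is illustrative, not part of the study). A score of 43.98 falls in the 30–50 "difficult (college)" band, well below the roughly 60–70 "standard" range usually recommended for patient-facing material.

```python
def interpret_fres(score: float) -> str:
    """Map a Flesch Reading Ease Score to Flesch's conventional bands."""
    bands = [
        (90, "very easy (about 5th grade)"),
        (80, "easy (6th grade)"),
        (70, "fairly easy (7th grade)"),
        (60, "standard (8th-9th grade)"),
        (50, "fairly difficult (10th-12th grade)"),
        (30, "difficult (college)"),
        (0,  "very difficult (college graduate)"),
    ]
    for threshold, label in bands:
        if score >= threshold:
            return label
    return "very difficult (college graduate)"


print(interpret_fres(43.98))  # -> "difficult (college)"
```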
Conclusion: ChatGPT has the potential to be a helpful decision-support tool for patients. However, it should not replace dentists, because incorrect and/or incomplete answers can negatively affect patient care.
Ethical Statement
The study was approved by the Hatay Mustafa Kemal University Ethics Committee (Date: 08.01.2025; Decision No: 09).
Supporting Institution
None
References
- Borkowski AA, Jakey CE, Mastorides SM, Kraus AL, Vidyarthi G, Viswanadhan N, et al. Applications of ChatGPT and Large Language Models in Medicine and Health Care: Benefits and Pitfalls. Fed Pract. 2023;40(6):170–173. doi:10.12788/fp.0386.
- Eysenbach G. The Role of ChatGPT, Generative Language Models, and Artificial Intelligence in Medical Education: A Conversation With ChatGPT and a Call for Papers. JMIR Med Educ. 2023;9:e46885. doi:10.2196/46885.
- Zhang F, Sun J. Multi-Feature Intelligent Oral English Error Correction Based on Few-Shot Learning Technology. Comput Intell Neurosci. 2022;2022:2501693. doi:10.1155/2022/2501693.
- Cruz JA, Wishart DS. Applications of machine learning in cancer prediction and prognosis. Cancer Inform. 2007;2:59–77.
- Dergaa I, Chamari K, Zmijewski P, Ben Saad H. From human writing to artificial intelligence generated text: examining the prospects and potential threats of ChatGPT in academic writing. Biol Sport. 2023;40(2):615–622. doi:10.5114/biolsport.2023.125623.
- Pan A, Musheyev D, Bockelman D, Loeb S, Kabarriti AE. Assessment of Artificial Intelligence Chatbot Responses to Top Searched Queries About Cancer. JAMA Oncol. 2023;9(10):1437–1440. doi:10.1001/jamaoncol.2023.2947.
- Jacobs T, Shaari A, Gazonas CB, Ziccardi VB. Is ChatGPT an Accurate and Readable Patient Aid for Third Molar Extractions? J Oral Maxillofac Surg. 2024;82(10):1239–1245. doi:10.1016/j.joms.2024.06.177.
- Kula B, Kula A, Bağcıer F, Alyanak B. Reliability and usefulness of ChatGPT in temporomandibular joint disorders. Int Dent J. 2024;74:S3–S4. doi:10.1016/j.identj.2024.07.579.
- Freire Y, Santamaría Laorden A, Orejas Pérez J, Gómez Sánchez M, Díaz-Flores García V, Suárez A. ChatGPT performance in prosthodontics: Assessment of accuracy and repeatability in answer generation. J Prosthet Dent. 2024;131(4):659.e1–659.e6. doi:10.1016/j.prosdent.2024.01.018.
- American Dental Association. Clinical Practice Guidelines and Dental Evidence [Web Page]. Available from: https://www.ada.org/resources/research/science-and-research-institute/evidence-based-dental-research.
- McLaughlin GH. SMOG grading-a new readability formula. Journal of Reading. 1969;12(8):639–646.
- Flesch R. A new readability yardstick. J Appl Psychol. 1948;32(3):221–233. doi:10.1037/h0057532.
- Koo TK, Li MY. A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research. J Chiropr Med. 2016;15(2):155–163. doi:10.1016/j.jcm.2016.02.012.
- Balel Y. Can ChatGPT be used in oral and maxillofacial surgery? J Stomatol Oral Maxillofac Surg. 2023;124(5):101471. doi:10.1016/j.jormas.2023.101471.
- Fuchs A, Trachsel T, Weiger R, Eggmann F. ChatGPT's performance in dentistry and allergy-immunology assessments: a comparative study. Swiss Dent J. 2023;134(2):1–17. doi:10.61872/sdj-2024-06-01.
- Tiwari A, Kumar A, Jain S, Dhull KS, Sajjanar A, Puthenkandathil R, et al. Implications of ChatGPT in Public Health Dentistry: A Systematic Review. Cureus. 2023;15(6):e40367. doi:10.7759/cureus.40367.
- Giannakopoulos K, Kavadella A, Aaqel Salim A, Stamatopoulos V, Kaklamanos EG. Evaluation of the Performance of Generative AI Large Language Models ChatGPT, Google Bard, and Microsoft Bing Chat in Supporting Evidence-Based Dentistry: Comparative Mixed Methods Study. J Med Internet Res. 2023;25:e51580. doi:10.2196/51580.