Research Article

Evaluation of Responses Given by Artificial Intelligence Chatbots to Questions on the ‘Fluoride in Dentistry’

Year 2025, Volume: 15, Issue: 3, 1069-1078, 30.09.2025

Abstract

Aim: This study aimed to evaluate the accuracy, completeness, reliability-quality, and readability of the information provided by different artificial intelligence chatbots in response to frequently asked questions on 'fluoride in dentistry'.
Methods: Fifteen frequently asked open-ended questions about fluoride applications in dentistry were posed to the ChatGPT 4.0, ChatGPT 4.10, Copilot, Gemini, and Perplexity platforms. All questions were asked only once, from the same computer IP address and over the same fixed fiber internet connection. Based on the current literature and clinical practice, all responses were evaluated by consensus by three expert pediatric dentists, each with at least ten years of clinical experience. Likert scales were used for accuracy and completeness, the mDISCERN scale for quality and reliability, and the Ateşman Readability Index for readability. Fisher's Exact test, the Kruskal-Wallis test, Dunn's test, and the Mann-Whitney U test were used for statistical analysis.
Results: According to the median values of the Likert accuracy and completeness scales, the ranking was Gemini > Perplexity > ChatGPT 4.10 > ChatGPT 4.0 > Copilot (p=0.016 and p=0.002, respectively). According to the mDISCERN index, the ranking was Perplexity = Gemini > ChatGPT 4.10 > ChatGPT 4.0 > Copilot (p=0.008). While the responses of all AI chatbots were, on average, of medium readability difficulty, there was a statistically significant difference between Gemini and Perplexity (p=0.010).
Conclusion: Although the answers given by all AI chatbots on 'fluoride in dentistry' were largely accurate and valid, the datasets used to train them should draw on high-quality sources containing the most up-to-date scientific guidelines, and easier readability should be ensured. Further research and development on the topic are needed.
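The Methods above score readability with the Ateşman Readability Index, a Flesch-type formula adapted to Turkish. As a rough illustration only (not the study's own implementation), the following Python sketch applies the published Ateşman formula, score = 198.825 - 40.175 × (syllables per word) - 2.610 × (words per sentence); the vowel-counting syllable heuristic and the difficulty bands used below are simplifying assumptions.

import re

TURKISH_VOWELS = set("aeıioöuüAEIİOÖUÜ")

def count_syllables(word: str) -> int:
    # Turkish syllable counts closely track vowel counts; this is a heuristic.
    return max(1, sum(1 for ch in word if ch in TURKISH_VOWELS))

def atesman_score(text: str) -> float:
    # Ateşman (1997) readability score: higher values mean easier text.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    if not sentences or not words:
        raise ValueError("text must contain at least one word and one sentence")
    syllables_per_word = sum(count_syllables(w) for w in words) / len(words)
    words_per_sentence = len(words) / len(sentences)
    return 198.825 - 40.175 * syllables_per_word - 2.610 * words_per_sentence

def difficulty_band(score: float) -> str:
    # Commonly cited Ateşman bands (assumed here); 50-69 is "medium difficulty".
    if score >= 90:
        return "very easy"
    if score >= 70:
        return "easy"
    if score >= 50:
        return "medium difficulty"
    if score >= 30:
        return "difficult"
    return "very difficult"

if __name__ == "__main__":
    sample = "Florür diş çürüklerini önlemeye yardımcı olur. Diş hekiminize danışınız."
    s = atesman_score(sample)
    print(f"Ateşman score: {s:.1f} ({difficulty_band(s)})")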

Project Number

-


Yapay zeka sohbet robotlarının ‘diş hekimliğinde flor’ konusu ile ilgili sorulara verdikleri yanıtların değerlendirilmesi

Year 2025, Volume: 15, Issue: 3, 1069-1078, 30.09.2025

Abstract

Aim: The aim of this study was to evaluate the information provided by different artificial intelligence chatbots in response to frequently asked questions on 'fluoride in dentistry' in terms of accuracy, completeness, reliability-quality, and readability.
Methods: Fifteen frequently asked open-ended questions about fluoride applications in dentistry were posed to the ChatGPT 4.0, ChatGPT 4.10, Copilot, Gemini, and Perplexity platforms. All questions were asked only once, from the same computer IP address and over the same fixed fiber internet connection. Likert scales were used for accuracy and completeness, the mDISCERN scale for quality and reliability, and the Ateşman Readability Index for readability. Fisher's Exact test, the Kruskal-Wallis test, Dunn's test, and the Mann-Whitney U test were used for statistical analysis.
Results: According to the median values of the Likert accuracy and completeness scales, the ranking was Gemini > Perplexity > ChatGPT 4.10 > ChatGPT 4.0 > Copilot (p=0.016 and p=0.002, respectively). According to the mDISCERN index, the ranking was Perplexity = Gemini > ChatGPT 4.10 > ChatGPT 4.0 > Copilot (p=0.008). While the responses of all AI chatbots were, on average, of medium readability difficulty, there was a statistically significant difference between Gemini and Perplexity (p=0.010).
Conclusion: Although the answers given by all AI chatbots on 'fluoride in dentistry' were mostly accurate and valid, the datasets used to train them should draw on high-quality sources containing the most up-to-date scientific guidelines, and easier readability should be ensured. Further research and development on the topic are needed.
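The statistical workflow named in the Methods (a Kruskal-Wallis omnibus test across the five chatbots, followed by pairwise comparisons) can be sketched as below. The ratings are made-up placeholder values, not the study's data; Dunn's post hoc test, which the study used, is provided by the separate scikit-posthocs package, so Bonferroni-corrected pairwise Mann-Whitney U tests stand in for it here.

from itertools import combinations
from scipy import stats

# Hypothetical 1-5 Likert accuracy ratings for 15 questions; NOT the study's data.
ratings = {
    "ChatGPT 4.0":  [4, 3, 4, 5, 3, 4, 4, 3, 5, 4, 3, 4, 4, 3, 4],
    "ChatGPT 4.10": [4, 4, 5, 5, 4, 4, 4, 3, 5, 4, 4, 4, 5, 4, 4],
    "Copilot":      [3, 3, 4, 4, 3, 3, 4, 3, 4, 3, 3, 4, 3, 3, 4],
    "Gemini":       [5, 4, 5, 5, 4, 5, 4, 4, 5, 5, 4, 5, 5, 4, 5],
    "Perplexity":   [5, 4, 5, 4, 4, 5, 4, 4, 5, 4, 4, 5, 4, 4, 5],
}

# Omnibus nonparametric comparison across the five chatbots.
h, p = stats.kruskal(*ratings.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")

# Pairwise follow-up comparisons with a Bonferroni correction
# (Dunn's test would be the direct analogue of the study's post hoc step).
pairs = list(combinations(ratings, 2))
for a, b in pairs:
    u, p_raw = stats.mannwhitneyu(ratings[a], ratings[b], alternative="two-sided")
    print(f"{a} vs {b}: U={u:.1f}, p={min(1.0, p_raw * len(pairs)):.4f}")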

Ethics Statement

Ethics committee approval was obtained from the Mersin University Non-Interventional Clinical Research Ethics Committee (Decision No: 779, 09.07.25).

Supporting Institution

-

Project Number

-

Acknowledgments

We thank the esteemed faculty members of the Mersin University Department of Pediatric Dentistry for their valuable support during the development and evaluation of this study.


Details

Primary Language: Turkish
Subjects: Preventive Health Services
Section: Research Article
Authors

Seçkin Aksu 0000-0002-5196-215X

Fatma Ayça Bakır 0009-0008-2382-7373

Project Number: -
Early View Date: September 27, 2025
Publication Date: September 30, 2025
Submission Date: July 20, 2025
Acceptance Date: September 10, 2025
Published in Issue: Year 2025, Volume: 15, Issue: 3

How to Cite

APA Aksu, S., & Bakır, F. A. (2025). Yapay zeka sohbet robotlarının ‘diş hekimliğinde flor’ konusu ile ilgili sorulara verdikleri yanıtların değerlendirilmesi. Mersin Üniversitesi Tıp Fakültesi Lokman Hekim Tıp Tarihi ve Folklorik Tıp Dergisi, 15(3), 1069-1078.
AMA Aksu S, Bakır FA. Yapay zeka sohbet robotlarının ‘diş hekimliğinde flor’ konusu ile ilgili sorulara verdikleri yanıtların değerlendirilmesi. Mersin Üniversitesi Tıp Fakültesi Lokman Hekim Tıp Tarihi ve Folklorik Tıp Dergisi. Eylül 2025;15(3):1069-1078.
Chicago Aksu, Seçkin, ve Fatma Ayça Bakır. “Yapay zeka sohbet robotlarının ‘diş hekimliğinde flor’ konusu ile ilgili sorulara verdikleri yanıtların değerlendirilmesi”. Mersin Üniversitesi Tıp Fakültesi Lokman Hekim Tıp Tarihi ve Folklorik Tıp Dergisi 15, sy. 3 (Eylül 2025): 1069-78.
EndNote Aksu S, Bakır FA (01 Eylül 2025) Yapay zeka sohbet robotlarının ‘diş hekimliğinde flor’ konusu ile ilgili sorulara verdikleri yanıtların değerlendirilmesi. Mersin Üniversitesi Tıp Fakültesi Lokman Hekim Tıp Tarihi ve Folklorik Tıp Dergisi 15 3 1069–1078.
IEEE S. Aksu ve F. A. Bakır, “Yapay zeka sohbet robotlarının ‘diş hekimliğinde flor’ konusu ile ilgili sorulara verdikleri yanıtların değerlendirilmesi”, Mersin Üniversitesi Tıp Fakültesi Lokman Hekim Tıp Tarihi ve Folklorik Tıp Dergisi, c. 15, sy. 3, ss. 1069–1078, 2025.
ISNAD Aksu, Seçkin - Bakır, Fatma Ayça. “Yapay zeka sohbet robotlarının ‘diş hekimliğinde flor’ konusu ile ilgili sorulara verdikleri yanıtların değerlendirilmesi”. Mersin Üniversitesi Tıp Fakültesi Lokman Hekim Tıp Tarihi ve Folklorik Tıp Dergisi 15/3 (Eylül 2025), 1069-1078.
JAMA Aksu S, Bakır FA. Yapay zeka sohbet robotlarının ‘diş hekimliğinde flor’ konusu ile ilgili sorulara verdikleri yanıtların değerlendirilmesi. Mersin Üniversitesi Tıp Fakültesi Lokman Hekim Tıp Tarihi ve Folklorik Tıp Dergisi. 2025;15:1069–1078.
MLA Aksu, Seçkin ve Fatma Ayça Bakır. “Yapay zeka sohbet robotlarının ‘diş hekimliğinde flor’ konusu ile ilgili sorulara verdikleri yanıtların değerlendirilmesi”. Mersin Üniversitesi Tıp Fakültesi Lokman Hekim Tıp Tarihi ve Folklorik Tıp Dergisi, c. 15, sy. 3, 2025, ss. 1069-78.
Vancouver Aksu S, Bakır FA. Yapay zeka sohbet robotlarının ‘diş hekimliğinde flor’ konusu ile ilgili sorulara verdikleri yanıtların değerlendirilmesi. Mersin Üniversitesi Tıp Fakültesi Lokman Hekim Tıp Tarihi ve Folklorik Tıp Dergisi. 2025;15(3):1069-78.
Creative Commons License
This journal is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

This is a periodical scientific publication of the Mersin University Faculty of Medicine. It may not be used without citing the source. Responsibility for the articles rests with the authors.

Cover

Ayşegül Tuğuz

From İlter Uzel's work "Dioskorides ve Öğrencisi" (Dioscorides and His Student)

Address

Mersin University Faculty of Medicine, Department of History of Medicine and Ethics, Çiftlikköy Campus

Yenişehir/Mersin