Research Article

Needs for clinical reasoning assessment methods and automatic question generation: An international survey

Year 2025, Volume 24, Issue 74, pp. 73-84, 22.12.2025
https://doi.org/10.25282/ted.1681816

Abstract

Aim: Written assessment methods are widely used to assess clinical reasoning in undergraduate health professions education, yet little is known about the specific approaches used and educators’ needs. Although automatic question generation, especially AI-based generation, offers promising efficiency gains, the extent of its application in clinical reasoning assessment remains unclear. The aim of this study was to determine the needs for written clinical reasoning assessment methods and automatic question generation in undergraduate health professions education.

Methods: An international, web-based survey was conducted between February and June 2024. Participants were health professions educators who were knowledgeable about the written assessment methods used in their programs. The survey covered current practices, perceived importance, barriers, and needs related to written clinical reasoning assessment methods and automatic question generation. Parametric and non-parametric statistical tests were used to analyze the data.
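
The abstract does not report which specific statistical tests were applied, so the snippet below is only a minimal sketch of how a parametric versus non-parametric comparison of survey ratings is commonly selected and run; the data, group labels, and the Shapiro-Wilk/t-test/Mann-Whitney choices are illustrative assumptions, not the authors' analysis code.

```python
# Minimal sketch only: the survey dataset, group labels, and the specific tests
# below (Shapiro-Wilk, t-test, Mann-Whitney U) are assumptions for illustration,
# not the authors' analysis code.
import numpy as np
from scipy import stats

# Hypothetical perceived-importance ratings (1-5) from two groups of programs
group_a = np.array([4, 5, 3, 4, 4, 5, 2, 4])
group_b = np.array([3, 3, 4, 2, 3, 4, 3, 2])

# Check the normality assumption to decide between a parametric and a
# non-parametric comparison
_, p_norm_a = stats.shapiro(group_a)
_, p_norm_b = stats.shapiro(group_b)

if p_norm_a > 0.05 and p_norm_b > 0.05:
    # Parametric: independent-samples t-test
    statistic, p_value = stats.ttest_ind(group_a, group_b)
    test_name = "independent-samples t-test"
else:
    # Non-parametric: Mann-Whitney U test
    statistic, p_value = stats.mannwhitneyu(group_a, group_b)
    test_name = "Mann-Whitney U test"

print(f"{test_name}: statistic = {statistic:.2f}, p = {p_value:.3f}")
```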

Results: Fifty-one programs from 33 countries participated. Multiple-choice questions (MCQs) were the most widely used method (100% of programs). Short-answer questions were perceived as the most important method for both formative and summative assessment. The main barriers to using written assessment methods were a lack of know-how (81.6%) and insufficient time (79.6%). “Differential diagnosis” and “management and treatment” were the most commonly assessed clinical reasoning components, while ethical aspects, the patient perspective, and interprofessional collaboration were the least assessed. Only 58.1% of programs provided feedback on MCQ results to students, and the level of detail of this feedback varied. Awareness and use of automatic question generation methods, both template-based and non-template-based (AI-based), were low. Educators expressed willingness to participate in training on assessment methods and automatic question generation.

Conclusion: While MCQs are ubiquitous, educators also recognize the importance of other methods, such as short-answer questions. Significant barriers limit programs’ ability to benefit from a broader range of assessment methods. There is a clear need for faculty development and for greater awareness of innovative solutions, such as automatic question generation, to enhance the efficiency of assessment practices in medical schools.
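
As a rough illustration of what the template-based automatic item generation mentioned above involves, the sketch below fills a simple item-model stem with variable slots, in the spirit of the AIG literature; the vignette, slot values, and the function name generate_stems are hypothetical assumptions, not items or tools from the surveyed programs.

```python
# Minimal sketch of template-based automatic item generation (an item model
# whose slots are filled from predefined value lists). The vignette, slot
# values, and function names are hypothetical, not items from the study.
from itertools import product

ITEM_MODEL = (
    "A {age}-year-old patient presents with {symptom} that started {duration} ago. "
    "Physical examination reveals {finding}. "
    "What is the most likely diagnosis?"
)

# Value lists for each slot; a real cognitive model would also constrain which
# combinations are clinically coherent and supply the key and distractors.
VARIABLES = {
    "age": ["25", "68"],
    "symptom": ["chest pain radiating to the left arm", "productive cough"],
    "duration": ["30 minutes", "five days"],
    "finding": ["diaphoresis and hypotension", "crackles over the right lower lobe"],
}

def generate_stems(model: str, variables: dict) -> list[str]:
    """Produce one candidate stem per combination of slot values."""
    keys = list(variables)
    return [
        model.format(**dict(zip(keys, values)))
        for values in product(*(variables[k] for k in keys))
    ]

if __name__ == "__main__":
    for stem in generate_stems(ITEM_MODEL, VARIABLES):
        print(stem)
```

AI-based (non-template) generation, by contrast, typically prompts a large language model to draft the stem, options, and key directly, with human review of the output.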

Ethical Statement

Gazi University Institutional Review Board approved the study on January 11, 2024 (code: 2024-56).

Supporting Institution

TÜBİTAK

Details

Primary Language: English
Subjects: Medical Education
Section: Research Article
Authors

Yavuz Selim Kıyak 0000-0002-5026-3234

Steven Durning 0000-0001-5223-1597

Sören Huwendiek 0000-0001-6116-9633

Felicitas Lony Wagner 0009-0002-1634-6359

Andrzej Kononowicz 0000-0003-2956-2093

Submission Date: April 28, 2025
Acceptance Date: July 1, 2025
Publication Date: December 22, 2025
Published Issue: Year 2025, Volume 24, Issue 74

How to Cite

Vancouver: Kıyak YS, Durning S, Huwendiek S, Wagner FL, Kononowicz A. Needs for clinical reasoning assessment methods and automatic question generation: An international survey. TED. 2025;24(74):73-84.