Research Article

Assessment of Medicine Faculty Biostatistics Exam with Different Models

Year 2017, Volume: 34 Issue: 4, 257 - 262, 10.10.2019

Abstract



A reliable and valid measurement tool is essential for any test. The purpose of this study was to perform item analysis on the questions of an exam measuring students' biostatistics knowledge and to assess the Biostatistics Program.



The study group consisted of 261 second-year students of Ondokuz Mayıs University Faculty of Medicine; 132 (50.6%) were female and 129 (49.4%) were male. Item analysis was performed using both the classical method and Rasch analysis. The mean score on the test, which consisted of 60 multiple-choice questions, was 47.47 ± 6.99; the lowest score was 15 and the highest 57. The KR-20 value was 0.86. When all the questions were examined with Rasch analysis, the item difficulty of 75% of the items fell between -1.60 and 1.60.
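The two indices reported above can be sketched as follows. This is a minimal illustration, not the study's actual data: the 261 × 60 response matrix is simulated under a Rasch model with hypothetical student abilities and item difficulties, KR-20 is computed with its textbook formula, and Rasch item difficulties are approximated with a simple log-odds (PROX-style) transform rather than the full RUMM2030 estimation used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 261 students x 60 dichotomous (0/1) items simulated
# under a Rasch model (not the study's actual responses).
theta = rng.normal(1.5, 1.0, size=(261, 1))     # student abilities (logits)
b_true = rng.normal(0.0, 1.0, size=(1, 60))     # item difficulties (logits)
prob = 1.0 / (1.0 + np.exp(-(theta - b_true)))  # Rasch success probability
responses = (rng.random(prob.shape) < prob).astype(int)

def kr20(x):
    """Kuder-Richardson 20 reliability for a 0/1 scored item matrix."""
    k = x.shape[1]
    p = x.mean(axis=0)                     # classical difficulty index per item
    total_var = x.sum(axis=1).var(ddof=1)  # variance of students' total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

def rasch_difficulty_prox(x):
    """Rough Rasch item difficulties via the log-odds (PROX) approximation."""
    p = x.mean(axis=0).clip(1e-6, 1 - 1e-6)
    b = np.log((1 - p) / p)   # harder item -> larger logit
    return b - b.mean()       # centre at 0 logits, as Rasch software does

rel = kr20(responses)
b_hat = rasch_difficulty_prox(responses)
print(f"KR-20: {rel:.2f}")
print(f"Items with |difficulty| <= 1.60 logits: {(np.abs(b_hat) <= 1.60).mean():.0%}")
```

With a 60-item test and a reasonable spread of abilities, the simulated KR-20 lands in the same high range as the 0.86 reported, and the centred log-odds difficulties track the true generating difficulties closely.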



In conclusion, the exam was found to be reliable and moderately difficult, and it effectively assessed the students' knowledge. Future studies are planned to assess biostatistics teaching at different class levels.




References

  • 1. Abdalla, M.E., 2011. What does item analysis tell us? Factors affecting the reliability of multiple choice questions. Gezira J Health Sci. 7, 17-25.
  • 2. Abozaid, H., Park, Y.S., Tekian, A., 2017. Peer review improves psychometric characteristics of multiple choice questions. Med Teach. 39(1), 50-54.
  • 3. Allen, D.D., 2012. Validity and reliability of the movement ability measure: A self-report instrument proposed for assessing movement across diagnoses and ability levels. Physical Therapy. 87(7), 899-916.
  • 4. Andrich, D., Sheridan, B., Luo, G., 2012. RUMM 2030 Version 5.4 for Windows. RUMM Laboratory Pty Ltd.
  • 5. Atılgan, H., Kan, A., Doğan, N., 2011. Eğitimde Ölçme ve Değerlendirme [Measurement and Evaluation in Education] (5th ed.). Anı Yayıncılık, Ankara.
  • 6. Biswas, S.S., Jain, V., Agrawal, V., Bindra, M., 2015. Small group learning: Effect on item analysis and accuracy of self-assessment of medical students. Education for Health. 28(1), 16-21.
  • 7. Brennan, R.L., 2011. Generalizability theory and classical test theory. Applied Measurement in Education. 24, 1-21.
  • 8. Brookhart, S., 2015. Making the most of multiple choice. Educ Leadership. 73(1), 36-39.
  • 9. Cappelleri, J.C., Lundy, J.J., Hays, R.D., 2014. Overview of classical test theory and item response theory for quantitative assessment of items in developing patient-reported outcome measures. Clin Ther. 5, 648-662.
  • 10. Collins, J., 2006. Education techniques for lifelong learning: Writing multiple-choice questions for continuing medical education activities and self-assessment modules. Radiographics. 26(2), 543-551.
  • 11. Crocker, L., Algina, J., 2008. Introduction to Classical and Modern Test Theory. Cengage Learning, USA.
  • 12. Deepak, K.K., Al-Umran, K.U., Al-Sheikh, M.H., Dkoli, B.V., Al-Rubaish, A., 2015. Psychometrics of multiple choice questions with nonfunctioning distracters: Implications to medical education. Indian J Physiol Pharmacol. 59(4), 428-435.
  • 13. De Gruijter, D.N., Van der Kamp, L.J., 2008. Statistical Test Theory for the Behavioral Sciences. Chapman & Hall, London.
  • 14. DeMars, C., 2010. Item Response Theory. Oxford University, New York.
  • 15. DeVellis, R.F., 2006. Classical test theory. Med Care. 44(11), 50-59.
  • 16. Epstein, R.M., 2007. Assessment in medical education. N Engl J Med. 356(4), 387-396.
  • 17. Hingorjo, M.R., Jaleel, F., 2012. Analysis of one-best MCQs: The difficulty index, discrimination index and distractor efficiency. J Pak Med Assoc. 62, 142-147.
  • 18. Hintze, J., 2007. NCSS and GESS. Kaysville, Utah.
  • 19. Kilgour, J.M., Tayyaba, S., 2016. An investigation into the optimal number of distractors in single-best answer exams. Adv Health Sci Educ Theory Pract. 21, 571-585.
  • 20. Li, J., Liu, H., Liu, H., Feng, T., Cai, Y., 2011. Psychometric assessment of HIV/STI sexual risk scale among MSM: A Rasch model approach. BMC Public Health. 11(763), 1-8.
  • 21. Mukherjee, P., Lahiri, S.K., 2015. Analysis of multiple choice questions (MCQs): Item and test statistics from an assessment in a medical college of Kolkata, West Bengal. IOSR J Dent Med Sci. 1, 47-52.
  • 22. Pallant, J., 2016. Scale Development, Rasch Analysis and Item Response Theory. Australian Consortium for Social and Political Research Inc. (ACSPRI), Melbourne.
  • 23. Petrillo, J., Cano, S.J., McLeod, L.D., Coon, C.D., 2015. Using classical test theory, item response theory, and Rasch measurement theory to evaluate patient-reported outcome measures: A comparison of worked examples. Value Health. 1, 25-34.
  • 24. Sim, S.M., Rasiah, R.I., 2006. Relation between item difficulty and discrimination indices in true/false type multiple choice questions of a para-clinical multidisciplinary paper. Annals Academy of Medicine. 35(2), 67-71.
  • 25. Tarrant, M., Ware, J., Mohammed, A.M., 2009. An assessment of functioning and non-functioning distractors in multiple-choice questions: A descriptive analysis. BMC Med Educ. 9(40), 1-8.
  • 26. Tomak, L., Bek, Y., 2015. Item analysis and evaluation in the examinations in the faculty of medicine at Ondokuz Mayis University. Niger J Clin Pract. 18(3), 387-394.
  • 27. Trakman, G.L., Forsyth, A., Hoye, R., Belski, R., 2017. The nutrition for sport knowledge questionnaire (NSKQ): Development and validation using classical test theory and Rasch analysis. J Int Soc Sports Nutr. 3, 14-26.
  • 28. Vegada, B., Shukla, A., Khilnani, A., Charan, J., Desai, C., 2016. Comparison between three option, four option and five option multiple choice question tests for quality parameters: A randomized study. Indian J Pharmacol. 48(5), 571-575.
  • 29. Zaman, A., Niwaz, A., Faize, F.A., Dahar, M.A., 2010. Analysis of multiple choice items and the effect of items' sequencing on difficulty level in the test of mathematics. European Journal of Social Sciences. 17(1), 61-67.
There are 29 citations in total.

Details

Journal Section Experimental Research
Authors

Leman Tomak

Publication Date October 10, 2019
Submission Date September 5, 2017
Acceptance Date December 8, 2017
Published in Issue Year 2017 Volume: 34 Issue: 4

Cite

APA Tomak, L. (2019). Assessment of Medicine Faculty Biostatistics Exam with Different Models. Journal of Experimental and Clinical Medicine, 34(4), 257-262.
AMA Tomak L. Assessment of Medicine Faculty Biostatistics Exam with Different Models. J. Exp. Clin. Med. October 2019;34(4):257-262.
Chicago Tomak, Leman. “Assessment of Medicine Faculty Biostatistics Exam With Different Models”. Journal of Experimental and Clinical Medicine 34, no. 4 (October 2019): 257-62.
EndNote Tomak L (October 1, 2019) Assessment of Medicine Faculty Biostatistics Exam with Different Models. Journal of Experimental and Clinical Medicine 34 4 257–262.
IEEE L. Tomak, “Assessment of Medicine Faculty Biostatistics Exam with Different Models”, J. Exp. Clin. Med., vol. 34, no. 4, pp. 257–262, 2019.
ISNAD Tomak, Leman. “Assessment of Medicine Faculty Biostatistics Exam With Different Models”. Journal of Experimental and Clinical Medicine 34/4 (October 2019), 257-262.
JAMA Tomak L. Assessment of Medicine Faculty Biostatistics Exam with Different Models. J. Exp. Clin. Med. 2019;34:257–262.
MLA Tomak, Leman. “Assessment of Medicine Faculty Biostatistics Exam With Different Models”. Journal of Experimental and Clinical Medicine, vol. 34, no. 4, 2019, pp. 257-62.
Vancouver Tomak L. Assessment of Medicine Faculty Biostatistics Exam with Different Models. J. Exp. Clin. Med. 2019;34(4):257-62.