Research Article
Year 2015, Special Issue 2015 II, 110 - 116, 01.12.2015
https://doi.org/10.17275/per.15.spi.2.13

Computer Aided Analysis of Multiple Choice Test Results


Abstract

Multiple-choice tests are among the most widely used assessment techniques in educational institutions. Several analyses must be performed to determine the validity and reliability of such a test and of the items it contains. To draw conclusions about a test as a whole, its mean score, reliability, mean difficulty, standard deviation, and measures of central tendency and dispersion must be computed; to draw conclusions about individual items, the item difficulty index, item discrimination index, item variance and standard deviation, and item reliability index must be computed. These computations are time-consuming and hard to do by hand, and even when the data are entered into a spreadsheet, the formulas can be difficult for a teacher to construct in the software. Interpreting the resulting values is a further obstacle for educators. As a result, teachers often do not, or cannot, evaluate the assessments they administer. In this study, a software tool was developed for the statistical evaluation of multiple-choice test results. With this software, test and item analyses of a multiple-choice exam can be performed, and the statistical results are presented to the user in colorized graphics. The main outputs of the software are examinees' scores, a frequency table, test-level statistics (range, mean, median, KR-20, mean test difficulty, standard deviation, variance, coefficient of variation, and coefficient of skewness), and, for every item, the item difficulty index, item discrimination index, item variance and standard deviation, item reliability index, and point-biserial correlation. The performance of distractors among the answer choices can also be inspected easily in the graphics section. The software additionally includes an info box that displays explanations of the computed statistics and their values, which can be helpful for users with limited knowledge of these statistics.
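The test- and item-level statistics the abstract enumerates can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' software: the score matrix is made-up data, all variable names are hypothetical, and the 27% upper/lower grouping used for the discrimination index is one common convention among several.

```python
import numpy as np

# Hypothetical 0/1 score matrix: rows = examinees, columns = items
# (1 = correct, 0 = incorrect). Illustrative data only.
X = np.array([
    [1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0],
])

totals = X.sum(axis=1)   # each examinee's raw score
n_items = X.shape[1]

# Item difficulty index p: proportion of examinees answering each item correctly
p = X.mean(axis=0)

# Item discrimination index D: upper-group minus lower-group difficulty,
# taking the top and bottom 27% of examinees ranked by total score
k = max(1, round(0.27 * X.shape[0]))
order = np.argsort(totals)
upper, lower = X[order[-k:]], X[order[:k]]
D = upper.mean(axis=0) - lower.mean(axis=0)

# KR-20 reliability: (n/(n-1)) * (1 - sum(p*q) / var(total scores))
q = 1 - p
kr20 = (n_items / (n_items - 1)) * (1 - (p * q).sum() / totals.var())

# Point-biserial correlation of each item with the total score
r_pb = np.array([np.corrcoef(X[:, j], totals)[0, 1] for j in range(n_items)])

print("difficulty:", np.round(p, 2))
print("discrimination:", np.round(D, 2))
print("KR-20:", round(kr20, 3))
print("point-biserial:", np.round(r_pb, 2))
```

An item with difficulty near 0.5 and a clearly positive discrimination index and point-biserial correlation is generally considered to be working well; values near zero or negative flag items worth reviewing, which is the kind of judgment the paper's info box is meant to support.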

References

  • Anastasi, A. (1997). Psychological Testing (7th ed.). New York: Macmillan Publishing.
  • Aydın, A. Çoktan seçmeli ölçme sonuçlarının bilgisayar yardımıyla analizi [Computer aided analysis of multiple choice test results] (Unpublished master’s thesis). Afyon Kocatepe University, Afyonkarahisar.
  • Çakan, M. (2004). Öğretmenlerin Ölçme-Değerlendirme Uygulamaları ve Yeterlik Düzeyleri: İlk ve Ortaöğretim [Teachers’ measurement and evaluation practices and competency levels: Primary and secondary education]. Journal of Faculty of Educational Sciences, 37(2), 99-114.
  • Çelikkaya, K., Karakuş, U., & Öztürk Demirbaş, Ç. (2010). Sosyal Bilgiler Öğretmenlerinin Ölçme-Değerlendirme Araçlarını Kullanma Düzeyleri ve Karşılaştıkları Sorunlar [Social studies teachers’ levels of use of measurement and evaluation instruments and the problems they encounter]. Ahi Evran Üniversitesi Eğitim Fakültesi Dergisi, 11(1), 57-76.
  • Hamzah, M., & Abdullah, S. (2011). Test item analysis: An educator professionalism approach. US-China Education Review, (3), 207-322.
  • Kuran, K. (2009). Alternatif Ölçme Değerlendirme Teknikleri Konusunda Sınıf Öğretmenlerinin Görüşlerinin Değerlendirilmesi [An evaluation of classroom teachers’ views on alternative measurement and evaluation techniques]. Mustafa Kemal Üniversitesi Sosyal Bilimler Enstitüsü Dergisi, 6(12), 209-234.
  • Osterlind, S. (2002). Constructing Test Items: Multiple-Choice, Constructed-Response, Performance, and Other Formats. New York: Kluwer Academic Publishers.
  • Siri, A., & Freddano, M. (2011). The use of item analysis for the improvement of objective examinations. Procedia - Social and Behavioral Sciences, 29, 188-197.
  • Swanson, D., Holtzman, K., Clauser, B., & Sawhill, A. (2005). Psychometric characteristics and response times for one-best-answer questions in relation to number and sources of options. Academic Medicine, 80, 93-96.
  • Tomak, L., & Bek, Y. (2015). Item analysis and evaluation in the examinations in the faculty of medicine at Ondokuz Mayis University. Nigerian Journal of Clinical Practice, 18(3). doi:10.4103/1119-3077.151720
  • Xu, Y., & Liu, Y. (2009). Teacher assessment knowledge and practice: A narrative inquiry of a Chinese college EFL teacher’s experience. TESOL Quarterly, 43(3), 493-513.
  • Yaman, S., & Karamustafaoğlu, S. (2011). Investigating prospective teachers’ perceived levels of efficacy towards measurement and evaluation. Journal of Faculty of Educational Sciences, 44(2), 53-72.
  • Yang, S., Tsou, M., Chen, E., Chan, K., & Chang, K. (2011). Statistical item analysis of the examination in anesthesiology for medical students using the Rasch model. Journal of the Chinese Medical Association, 74.
  • Yurdugül, H., & Van Batenburg, T. (2006). Item difficulty from graphical item analysis. Eurasian Journal of Educational Research, 24, 209-218.

Details

Primary Language English
Subjects Studies on Education
Journal Section Research Articles
Authors

Ertuğrul Ergün

Ali Aydın

Publication Date December 1, 2015
Acceptance Date October 30, 2015
Published in Issue Year 2015 Special Issue 2015 II

Cite

APA Ergün, E., & Aydın, A. (2015). Computer Aided Analysis of Multiple Choice Test Results. Participatory Educational Research, 2(5), 110-116. https://doi.org/10.17275/per.15.spi.2.13