Research Article

Evaluating knowledge levels of students with a Computerized Adaptive Test

Volume: 5 Number: 2 December 28, 2023


Abstract

The present study aims to develop and evaluate a computerized adaptive testing (CAT) system for multiple-choice achievement tests. To this end, the measurement results obtained from the CAT system, which is based on Item Response Theory (IRT), were compared with those of a test based on Classical Test Theory (CTT) in terms of the students' knowledge levels, the reliability of measurement, and the number of items. The study was conducted with 873 students from a state university in Turkey; the research took three years and involved three phases. According to the findings, the developed CAT system determined the students' academic achievement levels with very high reliability. When all the items in the test were scored under both frameworks, a high, significant positive correlation was found between the resulting scores. Moreover, there was a high, significant positive correlation between the students' knowledge levels as estimated by the CAT system and the achievement-test scores they received on a paper-and-pencil test. Finally, the number of questions administered to the students in the CAT system was reduced by 50% compared to the CTT-based test.
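The core mechanism described above can be illustrated with a minimal sketch of a CAT loop under a two-parameter logistic (2PL) IRT model: select the unadministered item with maximum Fisher information at the current ability estimate, update the estimate after each response, and stop when the standard error is small enough. The item bank, parameter values, estimation method, and stopping rule here are illustrative assumptions, not the system developed in the study.

```python
import math

# Hypothetical item bank: (discrimination a, difficulty b) pairs for a 2PL model.
ITEM_BANK = [(1.2, -1.0), (0.8, -0.5), (1.5, 0.0),
             (1.0, 0.5), (1.3, 1.0), (0.9, 1.5)]

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, administered):
    """Pick the unused item with maximum information at the current theta."""
    candidates = [i for i in range(len(ITEM_BANK)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *ITEM_BANK[i]))

def update_theta(responses, theta=0.0, steps=100, lr=0.1):
    """Crude gradient-ascent ML estimate of theta from (item, 0/1 score) pairs."""
    for _ in range(steps):
        grad = 0.0
        for idx, score in responses:
            a, b = ITEM_BANK[idx]
            grad += a * (score - p_correct(theta, a, b))
        theta += lr * grad
    return theta

def run_cat(answer_fn, se_target=0.6, max_items=len(ITEM_BANK)):
    """Administer items until the standard error falls below se_target."""
    theta, administered, responses = 0.0, set(), []
    se = float("inf")
    while len(administered) < max_items:
        idx = next_item(theta, administered)
        administered.add(idx)
        responses.append((idx, answer_fn(idx)))
        theta = update_theta(responses)
        se = 1.0 / math.sqrt(sum(item_information(theta, *ITEM_BANK[i])
                                 for i in administered))
        if se < se_target:
            break
    return theta, se
```

The standard-error stopping rule is what allows an adaptive test to terminate with fewer items than a fixed-length test at comparable reliability; operational CAT systems typically use Newton-Raphson MLE or Bayesian EAP estimation rather than the simplified gradient step shown here.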

Keywords

computerized adaptive test, individualized testing, item response theory, paper-and-pencil test, number of test items, intelligent tutoring systems

APA
Yağcı, M. (2023). Evaluating knowledge levels of students with a Computerized Adaptive Test. Journal of Teacher Education and Lifelong Learning, 5(2), 921-932. https://doi.org/10.51535/tell.1280384