Research Article

The Effect of Item Pools of Different Strengths on the Test Results of Computerized-Adaptive Testing

Volume: 8 Number: 1 March 15, 2021


Abstract

Item response theory (IRT) provides several important advantages for exams administered, or planned to be administered, digitally. For computerized adaptive tests (CAT) to make valid and reliable estimates supported by IRT, high-quality item pools are required. This study examines how adaptive test applications vary across item pools composed of items with different difficulty levels. Within the scope of the study, items were examined in which the b parameter varies while the a and c parameters are held within a fixed range. To this end, eight item pools of 500 items each, with varying difficulty levels, were simulated along with ability scores for 2000 examinees. RMSD, BIAS, and test lengths were examined across the resulting CAT simulations. The study found that tests drawing on item pools whose b parameters span a range matching the examinees' ability levels terminate with fewer items and yield more accurate estimates. When the b parameter takes values in a narrower range, estimating extreme ability values outside that range requires more items. Accurate estimation was especially difficult for individuals with high ability levels when the item pool consisted of easy items, and for individuals with low ability levels when the pool consisted of difficult items.
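The kind of simulation the abstract describes can be sketched in a few dozen lines: a 3PL response model, maximum-information item selection, grid-based EAP ability estimation, and a standard-error stopping rule, followed by RMSD and BIAS over the estimates. The pool sizes, parameter ranges, prior, grid, and stopping threshold below are illustrative assumptions only, not the study's actual design.

```python
import numpy as np

rng = np.random.default_rng(42)
GRID = np.linspace(-4, 4, 81)          # ability grid for EAP (assumed)
PRIOR = np.exp(-GRID**2 / 2)           # standard-normal prior (assumed)

def p3pl(theta, a, b, c):
    """3PL probability of a correct response."""
    return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

def info(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p3pl(theta, a, b, c)
    return a**2 * ((1 - p) / p) * ((p - c) / (1 - c))**2

def eap(a, b, c, resp):
    """EAP estimate and posterior SD on the grid."""
    post = PRIOR.copy()
    for ai, bi, ci, u in zip(a, b, c, resp):
        p = p3pl(GRID, ai, bi, ci)
        post *= p**u * (1 - p)**(1 - u)
    post /= post.sum()
    est = (GRID * post).sum()
    sd = np.sqrt(((GRID - est)**2 * post).sum())
    return est, sd

def run_cat(theta_true, a, b, c, max_len=30, se_stop=0.3):
    """Maximum-information CAT for one examinee: (estimate, test length)."""
    used, resp, est = [], [], 0.0
    for _ in range(max_len):
        infos = info(est, a, b, c)
        infos[used] = -np.inf                       # mask administered items
        i = int(np.argmax(infos))
        used.append(i)
        resp.append(int(rng.random() < p3pl(theta_true, a[i], b[i], c[i])))
        est, sd = eap(a[used], b[used], c[used], resp)
        if sd < se_stop:
            break
    return est, len(used)

# Hypothetical pool: b varies widely, a and c are held to fixed ranges
n_items = 500
a = rng.uniform(0.8, 2.0, n_items)
b = rng.uniform(-3, 3, n_items)
c = rng.uniform(0.0, 0.25, n_items)

thetas = rng.standard_normal(200)                   # simulated true abilities
results = [run_cat(t, a, b, c) for t in thetas]
ests = np.array([r[0] for r in results])
bias = (ests - thetas).mean()                       # BIAS
rmsd = np.sqrt(((ests - thetas)**2).mean())         # RMSD
print(f"BIAS={bias:.3f}  RMSD={rmsd:.3f}")
```

Rerunning this sketch while narrowing the range of `b` (e.g. `rng.uniform(-1, 1, n_items)`) illustrates the abstract's finding: examinees with extreme true abilities then need more items and are estimated less accurately.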

Keywords


Details

Primary Language

English

Subjects

Studies on Education

Journal Section

Research Article

Publication Date

March 15, 2021

Submission Date

May 10, 2020

Acceptance Date

January 11, 2021

Published in Issue

Year 2021 Volume: 8 Number: 1

APA
Kezer, F. (2021). The Effect of Item Pools of Different Strengths on the Test Results of Computerized-Adaptive Testing. International Journal of Assessment Tools in Education, 8(1), 145-155. https://doi.org/10.21449/ijate.735155
