Item response theory (IRT) offers important advantages for exams that are administered, or are to be administered, digitally. For computerized adaptive tests (CATs) to produce valid and reliable IRT-based estimates, high-quality item pools are required. This study examines how adaptive test applications vary across item pools composed of items with different difficulty levels. Within the scope of the study, the impact of the items was examined when the b parameter varied while the a and c parameters were kept within a fixed range. To this end, eight item pools, each consisting of 500 items with varying difficulty levels, were generated in simulation together with ability scores for 2000 examinees. Following the CAT simulations, RMSD, BIAS, and test length were examined. The study found that tests run on item pools whose b-parameter range matched the examinees' ability levels terminated with fewer items and yielded more accurate estimation. When the b parameter took values in a narrower range, estimating extreme ability values not covered by that range required more items. Accurate estimation was especially difficult for high-ability individuals in tests administered with an item pool of easy items, and for low-ability individuals in tests administered with an item pool of difficult items.
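For readers who want to see how the evaluation criteria fit together, the sketch below shows the 3PL item response function implied by the a, b, and c parameters, the item information function that CATs typically use to select the next item, and the RMSD and BIAS statistics used to compare estimated and true abilities. This is a minimal illustration, not the study's code: the helper names and all parameter ranges are assumptions chosen for the example.

```python
# Minimal sketch (assumed setup, not the study's configuration):
# 3PL model, item information, and the RMSD/BIAS evaluation criteria.
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    """Fisher information of a 3PL item at theta; CATs typically
    administer the item with the highest information next."""
    p = p_3pl(theta, a, b, c)
    return a**2 * ((p - c) ** 2 / (1.0 - c) ** 2) * ((1.0 - p) / p)

def rmsd(theta_hat, theta_true):
    """Root mean squared deviation between estimated and true abilities."""
    return float(np.sqrt(np.mean((theta_hat - theta_true) ** 2)))

def bias(theta_hat, theta_true):
    """Mean signed error; positive values mean systematic overestimation."""
    return float(np.mean(theta_hat - theta_true))

# Example pool: a and c kept in fixed ranges, b restricted to an
# "easy items" range (all ranges below are illustrative assumptions).
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 2.0, 500)      # discrimination, fixed range
b = rng.uniform(-3.0, 0.0, 500)     # difficulty: an easy pool
c = rng.uniform(0.0, 0.25, 500)     # pseudo-guessing, fixed range
theta = rng.normal(0.0, 1.0, 2000)  # simulated true abilities

# For a high-ability examinee, no item in this easy pool is very
# informative, which is why estimation there needs more items:
print(f"max item information at theta=2.5: {info_3pl(2.5, a, b, c).max():.3f}")
```

Repeating such a comparison across pools whose b ranges do or do not cover the ability distribution reproduces the pattern reported above: mismatched pools require more items and show larger RMSD at the extremes of the ability scale.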
| Primary Language | English |
| --- | --- |
| Subjects | Studies on Education |
| Journal Section | Articles |
| Authors | |
| Publication Date | March 15, 2021 |
| Submission Date | May 10, 2020 |
| Published in Issue | Year 2021 |