Research Article

Effect of Routing Methods on the Performance of Multi-Stage Tests

Volume: 2022, Number: 19, October 30, 2022

Abstract

In recent decades, owing to advances and growing opportunities in technology, computer-based testing (CBT) has become a popular alternative to traditional fixed-item paper-and-pencil tests. In particular, multi-stage tests (MST), an algorithm-based form of CBT, have become a viable alternative to traditional fixed-item tests because of the measurement advantages they provide. This study aimed to examine the effect of different routing rules and scoring methods under different ability distributions in MSTs. For this purpose, three routing rules, three ability estimation methods, and two ability distributions were manipulated in a simulation design. Although no single method was clearly best across the studied conditions, Kullback-Leibler was the most efficient routing method and worked best with the expected a posteriori (EAP) scoring method in most conditions. Furthermore, EAP and Bayes modal (BM) estimation provided higher measurement efficiency than maximum likelihood (ML) estimation. Recommendations for using these routing methods are provided, along with suggestions for further research.
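To make the routing rule the study found most efficient concrete, the following is a minimal Python sketch (not the authors' code) of Kullback-Leibler module routing with EAP provisional scoring in a two-stage MST. It assumes a 2PL item response model, a standard normal prior, and made-up item parameters; the function names, module bank, and responses are hypothetical and for illustration only.

import numpy as np

def p_2pl(theta, a, b):
    # 2PL probability of a correct response
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap_estimate(a, b, responses, grid=np.linspace(-4, 4, 81)):
    # EAP ability estimate: posterior mean over a quadrature grid,
    # with a standard normal prior (assumed here)
    prior = np.exp(-0.5 * grid ** 2)
    p = p_2pl(grid[:, None], a, b)                      # grid x items
    likelihood = np.prod(np.where(responses, p, 1 - p), axis=1)
    posterior = prior * likelihood
    return float(np.sum(grid * posterior) / np.sum(posterior))

def module_kl_index(theta_hat, a, b, delta=1.0, n_pts=61):
    # Kullback-Leibler routing index for one module: item-level
    # KL(theta || theta_hat), summed over the module's items and
    # integrated over [theta_hat - delta, theta_hat + delta]
    thetas = np.linspace(theta_hat - delta, theta_hat + delta, n_pts)
    p0 = p_2pl(theta_hat, a, b)                         # at the estimate
    p = p_2pl(thetas[:, None], a, b)                    # over the interval
    kl = p0 * np.log(p0 / p) + (1 - p0) * np.log((1 - p0) / (1 - p))
    return kl.sum(axis=1).sum() * (thetas[1] - thetas[0])  # crude quadrature

# Hypothetical second-stage modules: (discrimination a, difficulty b) per item.
modules = {
    "easy":   (np.array([1.0, 1.2, 0.9]), np.array([-1.5, -1.0, -0.8])),
    "medium": (np.array([1.1, 1.0, 1.3]), np.array([-0.2,  0.0,  0.3])),
    "hard":   (np.array([0.9, 1.4, 1.1]), np.array([ 0.9,  1.2,  1.6])),
}

# Made-up responses to a routing module, then KL-based routing to stage 2.
a0, b0 = np.array([1.0, 1.1, 1.2]), np.array([-0.5, 0.0, 0.5])
responses = np.array([1, 1, 0], dtype=bool)
theta_hat = eap_estimate(a0, b0, responses)
route = max(modules, key=lambda m: module_kl_index(theta_hat, *modules[m]))
print(f"provisional EAP estimate {theta_hat:.2f} -> route to '{route}' module")

The sketch routes the examinee to whichever second-stage module maximizes the KL index around the provisional EAP estimate; swapping eap_estimate for an ML or BM estimator reproduces the kind of routing-by-scoring comparison described in the abstract.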

Keywords

Multi-Stage Testing, Routing, Ability Estimation

APA
Erdem Kara, B. (2022). Effect of Routing Methods on the Performance of Multi-Stage Tests. International Journal of Turkish Education Sciences, 2022(19), 343-354. https://doi.org/10.46778/goputeb.1123902