Research Article

Explanatory Item Response Models for Polytomous Item Responses

Volume: 6 Number: 2 July 15, 2019

Abstract

Item response theory is a widely used framework for the design, scoring, and scaling of measurement instruments. Item response models are typically applied to dichotomously scored questions with only two score points (e.g., multiple-choice items). However, given the increasing use of instruments that include questions with multiple response categories, such as surveys, questionnaires, and psychological scales, polytomous item response models are becoming more common in education and psychology. This study demonstrates the application of explanatory item response models to polytomous item responses in order to explain common variability in item clusters, person groups, and interactions between item clusters and person groups. Explanatory forms of several polytomous item response models, such as the Partial Credit Model and the Rating Scale Model, are demonstrated, and the estimation procedures for these models are explained. The findings suggest that explanatory item response models can be more robust and parsimonious than traditional item response models for polytomous data in which items and persons share common characteristics. By estimating fewer item parameters, explanatory polytomous item response models can also provide more information about response patterns in item responses.
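The Partial Credit Model mentioned in the abstract assigns each response category a probability based on cumulative "step" comparisons between person ability and category thresholds. As a minimal sketch (not code from the article, and with hypothetical threshold values), the category probabilities can be computed as a softmax over cumulative logits:

```python
import math

def pcm_probabilities(theta, thresholds):
    """Category probabilities under the Partial Credit Model (PCM).

    theta: person ability on the latent scale.
    thresholds: step difficulties (delta_1, ..., delta_m) for an item
    with m + 1 ordered score categories.
    Returns probabilities for scores 0..m.
    """
    # Cumulative sums of (theta - delta_k); score 0 has an empty sum (= 0).
    logits = [0.0]
    total = 0.0
    for delta in thresholds:
        total += theta - delta
        logits.append(total)
    # Normalizing the exponentiated cumulative logits (a softmax)
    # yields the PCM category probabilities.
    exps = [math.exp(x) for x in logits]
    denom = sum(exps)
    return [e / denom for e in exps]

# Example: a 3-category item (scores 0, 1, 2) with hypothetical step
# difficulties, evaluated for a person of average ability (theta = 0).
probs = pcm_probabilities(theta=0.0, thresholds=[-0.5, 0.8])
```

The Rating Scale Model is the special case in which all items share one set of thresholds shifted by an item location, which is one way such models estimate fewer item parameters. Explanatory versions go further by replacing free item or person parameters with effects of item and person covariates, as in the generalized linear mixed model framework of lme4 cited below.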

References

  1. Albano, A. D. (2013). Multilevel modeling of item position effects. Journal of Educational Measurement, 50(4), 408–426. doi:10.1111/jedm.12026
  2. Adams, R. J., Wu, M. L., & Wilson, M. (2012). The Rasch rating model and the disordered threshold controversy. Educational and Psychological Measurement, 72(4), 547–573. doi:10.1177/0013164411432166
  3. American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: AERA.
  4. Andrich, D. (1978). Application of a psychometric rating model to ordered categories which are scored with successive integers. Applied Psychological Measurement, 2(4), 581–594. doi:10.1177/014662167800200413
  5. Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6), 716–723. doi:10.1109/TAC.1974.1100705
  6. Bates, D., Maechler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. doi:10.18637/jss.v067.i01
  7. Beretvas, S. N. (2008). Cross-classified random effects models. In A. A. O’Connell & D. B. McCoach (Eds.), Multilevel modeling of educational data (pp. 161–197). Charlotte, NC: Information Age Publishing.
  8. Birnbaum, A. (1968). Some latent trait models and their use in inferring an examinee’s ability. In F. M. Lord & M. R. Novick (Eds.), Statistical theories of mental test scores. Reading, MA: Addison–Wesley.

Details

Primary Language

English

Subjects

Studies on Education

Journal Section

Research Article

Publication Date

July 15, 2019

Submission Date

January 19, 2019

Acceptance Date

May 15, 2019

Published in Issue

Year 2019 Volume: 6 Number: 2

APA
Stanke, L., & Bulut, O. (2019). Explanatory Item Response Models for Polytomous Item Responses. International Journal of Assessment Tools in Education, 6(2), 259-278. https://doi.org/10.21449/ijate.515085
