Low-stakes assessments, commonly used in higher education and large-scale K-12 testing, pose unique challenges for score interpretation because examinees may disengage during testing. The IRTree Model for Disengagement, which can be applied with various item-level indicators of disengagement, addresses this issue by accounting for disengaged responses, thereby improving the accuracy of score interpretations. This study examines the effectiveness of this IRTree model by evaluating its parameter estimation accuracy across conditions that vary in sample size, test length, prior distributions, and the correlation between latent traits. Using a 2 × 2 × 2 × 3 simulation design, we found that the IRTree model effectively recovers person and item parameters, with factors such as test length and prior distributions moderately influencing accuracy. While larger sample sizes and matched priors enhance parameter recovery, non-matched priors also perform adequately. Because the model simultaneously estimates two latent traits, disengagement and the trait of interest, estimation can leverage their relationship, reducing parameter bias and increasing parameter coverage rates. These findings indicate that the model supports more valid interpretations of test scores, particularly in low-stakes contexts where disengagement might otherwise distort results. The results underscore the IRTree model's utility in educational assessment, improving the validity of score interpretations in low-stakes testing environments, regardless of the disengagement indicator used.
| Primary Language | English |
|---|---|
| Subjects | Item Response Theory, Modelling, Testing, Assessment and Psychometrics (Other) |
| Journal Section | Articles |
| Authors | |
| Publication Date | September 30, 2025 |
| Submission Date | March 21, 2025 |
| Acceptance Date | September 30, 2025 |
| Published in Issue | Year 2025 Volume: 16 Issue: 3 |
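To make the model structure described in the abstract concrete, the sketch below simulates responses under a two-node IRTree for disengagement: one node governs whether a response is engaged (driven by a disengagement-related trait), and a second node governs correctness of engaged responses (driven by the trait of interest), with the two traits drawn from a bivariate normal with a chosen correlation. This is a minimal illustrative sketch, not the authors' code; all parameter values, the Rasch-type node models, and the scoring rule for disengaged responses are assumptions for illustration only.

```python
# Illustrative simulation of a two-node IRTree for disengagement.
# Node 1: engaged vs. disengaged response (disengagement trait theta_e).
# Node 2: correct vs. incorrect for engaged responses (trait of interest theta_a).
# All numeric values below are hypothetical, not from the study.
import numpy as np

rng = np.random.default_rng(2025)

n_persons, n_items, rho = 1000, 30, 0.4  # one hypothetical simulation condition

# Correlated latent traits (engagement propensity and ability)
cov = np.array([[1.0, rho], [rho, 1.0]])
theta = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n_persons)
theta_e, theta_a = theta[:, 0], theta[:, 1]

# Hypothetical Rasch-type item parameters for each node
b_engage = rng.normal(0.0, 1.0, n_items)   # threshold for responding engagedly
b_correct = rng.normal(0.0, 1.0, n_items)  # item difficulty

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Node 1: probability of an engaged response to each item
p_engaged = sigmoid(theta_e[:, None] - b_engage[None, :])
engaged = rng.binomial(1, p_engaged)

# Node 2: correctness of engaged responses; disengaged responses are
# modeled here as random guesses (an illustrative assumption)
p_correct = sigmoid(theta_a[:, None] - b_correct[None, :])
correct_if_engaged = rng.binomial(1, p_correct)
response = np.where(engaged == 1, correct_if_engaged,
                    rng.binomial(1, 0.25, size=engaged.shape))

print(f"Mean engagement rate: {engaged.mean():.3f}")
print(f"Mean proportion correct: {response.mean():.3f}")
```

Varying `n_persons`, `n_items`, `rho`, and the priors used at the estimation stage corresponds to the simulation factors named in the abstract (sample size, test length, trait correlation, and prior distributions); estimation of such a model would typically be done with an IRTree-capable package rather than the raw simulation code shown here.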