Research Article

Investigation of the effect of parameter estimation and classification accuracy in mixture IRT models under different conditions

Year 2022, Volume 9, Issue 4, 1013–1029, 22.12.2022
https://doi.org/10.21449/ijate.1164590

Abstract

This study examines item parameter estimation and classification accuracy in mixture item response theory (IRT) models under different conditions. The manipulated factors of the simulation study are the mixture IRT model (Rasch, 2PL, 3PL), sample size (600, 1000), number of items (10, 30), number of latent classes (2, 3), missing-data type (complete data, missing at random (MAR), missing not at random (MNAR)), and percentage of missing data (10%, 20%). Data were generated for each of the three mixture IRT models with code written in R, and the MplusAutomation package, which automates running Mplus analyses from R, was used to analyze the data. Mean RMSE values were computed for the item difficulty, item discrimination, and guessing parameter estimates; the mean RMSE values for the Mixture Rasch model were lower than those for the Mixture 2PL and Mixture 3PL models. Classification accuracy percentages were also computed: the Mixture Rasch model with 30 items, 2 latent classes, a sample size of 1000, and complete data had the highest classification accuracy. Finally, a factorial ANOVA was used to evaluate the main and interaction effects of each factor.
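
The simulation flow described above (generate item responses in R for a given condition, calibrate them in Mplus through MplusAutomation, then summarize RMSE) can be illustrated with a minimal sketch. The R code below is not the authors' code; the mixing proportions, class-specific difficulties, and the calibration placeholder b_hat are illustrative assumptions for one two-class Mixture Rasch condition.

    # Minimal sketch (illustrative, not the authors' code): one condition of a
    # two-class mixture Rasch simulation and the RMSE of difficulty estimates.
    set.seed(2022)

    n_items    <- 10                # one of the studied test lengths (10, 30)
    n_persons  <- 600               # one of the studied sample sizes (600, 1000)
    class_prob <- c(0.5, 0.5)       # assumed equal mixing proportions

    # Class-specific item difficulties: the latent classes differ in their b-parameters.
    b <- rbind(seq(-2, 2, length.out = n_items),        # class 1
               rev(seq(-2, 2, length.out = n_items)))   # class 2

    cls   <- sample(1:2, n_persons, replace = TRUE, prob = class_prob)
    theta <- rnorm(n_persons)       # person abilities

    # Rasch model: P(X = 1) = exp(theta - b) / (1 + exp(theta - b))
    p    <- plogis(matrix(theta, n_persons, n_items) - b[cls, ])
    resp <- matrix(rbinom(n_persons * n_items, 1, p), n_persons, n_items)
    colnames(resp) <- paste0("u", 1:n_items)

    # The response matrix would then be calibrated (e.g., in Mplus via
    # MplusAutomation::mplusObject() and mplusModeler()), yielding estimates b_hat.
    rmse <- function(est, true) sqrt(mean((est - true)^2))
    # rmse(b_hat[1, ], b[1, ])      # RMSE for class-1 difficulties, once b_hat is available

In the study itself, this generate-calibrate-evaluate cycle would be repeated over replications for every combination of the manipulated factors, with RMSE values averaged per condition and the estimated latent-class assignments compared with the true memberships to obtain classification accuracy percentages.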

References

  • Alexeev, N., Templin, J., & Cohen, A.S. (2011). Spurious latent classes in the mixture Rasch model. Journal of Educational Measurement, 48, 313–332.
  • Cohen, J. (1988). Statistical power analysis for the behavioural sciences (2nd ed.). Academic.
  • Cohen, A.S., & Bolt, D.M. (2005). A mixture model analysis of differential item functioning. Journal of Educational Measurement, 42, 133–148.
  • Cho, S.-J., Cohen, A.S., & Kim, S.-H. (2013). Markov chain Monte Carlo estimation of a mixture item response theory model. Journal of Statistical Computation and Simulation, 83, 278–306. https://doi.org/10.1080/00949655.2011.603090
  • Cho, H.J., Lee, J., & Kingston, N. (2012). Examining the effectiveness of test accommodation using DIF and a mixture IRT model. Applied Measurement in Education, 25(4), 281–304. https://doi.org/10.1080/08957347.2012.714682
  • Choi, Y.J., & Cohen, A.S. (2020). Comparison of scale identification methods in mixture IRT models. Journal of Modern Applied Statistical Methods, 18(1), eP2971. https://doi.org/10.22237/jmasm/1556669700
  • Collins, L.M., & Lanza, S.T. (2010). Latent class and latent transition analysis. John Wiley & Sons.
  • De Ayala, R.J., Plake, B.S., & Impara, J.C. (2001). The impact of omitted responses on the accuracy of ability estimation in item response theory. Journal of Educational Measurement, 38, 213–234. https://doi.org/10.1111/j.1745-3984.2001.tb01124.x
  • De Ayala, R.J., & Santiago, S.Y. (2017). An introduction to mixture item response theory models. Journal of School Psychology, 60, 25–40. https://doi.org/10.1016/j.jsp.2016.01.002
  • Edwards, J.M., & Finch, W.H. (2018). Recursive partitioning methods for data imputation in the context of item response theory: A Monte Carlo simulation. Psicológica, 39(1), 88-117. https://doi.org/10.2478/psicolj-2018-0005
  • Finch, H. (2008). Estimation of item response theory parameters in the presence of missing data. Journal of Educational Measurement, 45(3), 225–245. https://doi.org/10.1111/j.1745-3984.2008.00062.x
  • Finch, W.H., & French, B.F. (2012). Parameter estimation with mixture item response theory models: A Monte Carlo comparison of maximum likelihood and Bayesian methods. Journal of Modern Applied Statistical Methods, 11(1), 167-178.
  • Hallquist, M.N., & Wiley, J.F. (2018). MplusAutomation: An R package for facilitating large-scale latent variable analyses in Mplus. Structural Equation Modeling: A Multidisciplinary Journal, 25(4), 621–638. https://doi.org/10.1080/10705511.2017.1402334
  • Hambleton, R.K., Swaminathan, H., & Rogers, H.J. (1991). Fundamentals of Item Response Theory. Sage.
  • Hohensinn, C., & Kubinger, K.D. (2011). Applying item response theory methods to examine the impact of different response formats. Educational and Psychological Measurement, 71, 732-746. https://doi.org/10.1177/0013164410390032
  • Jilke, S., Meuleman, B., & Van de Walle, S. (2015). We need to compare, but how? Measurement equivalence in comparative public administration. Public Administration Review, 75(1), 36–48. https://doi.org/10.1111/puar.12318
  • Kolen, M.J., & Brennan, R.L. (2004). Test equating, scaling, and linking: Methods and practices. Springer.
  • Kutscher, T., Eid, M., & Crayen, C. (2019). Sample size requirements for applying mixed polytomous item response models: Results of a Monte Carlo simulation study. Frontiers in Psychology, 10, 2494. https://doi.org/10.3389/fpsyg.2019.02494
  • Lee, S. (2012). The impact of missing data on the dichotomous mixture IRT models [Unpublished doctoral dissertation]. The University of Georgia.
  • Lee, S., Han, S., & Choi, S.W. (2021). DIF detection with zero-inflation under the factor mixture modeling framework. Educational and Psychological Measurement. https://doi.org/10.1177/00131644211028995
  • Li, F., Cohen, A.S., Kim, S.-H., & Cho, S.-J. (2009). Model selection methods for mixture dichotomous IRT models. Applied Psychological Measurement, 33, 353-373. https://doi.org/10.1177/0146621608326422
  • Little, R.J.A., & Rubin, D.B. (1987). Statistical analysis with missing data. Wiley.
  • Maij-de Meij, A.M., Kelderman, H., & van der Flier, H. (2008). Fitting a mixture item response theory model to personality questionnaire data: Characterizing latent classes and investigating possibilities for improving prediction. Applied Psychological Measurement, 32, 611-631. https://doi.org/10.1177/0146621607312613
  • Muthén, L.K., & Muthén, B.O. (1998–2017). Mplus user's guide (8th ed.). Muthén & Muthén.
  • Norouzian, R., & Plonsky, L. (2018). Eta- and partial eta-squared in L2 research: A cautionary review and guide to more appropriate usage. Second Language Research, 34, 257–271. https://doi.org/10.1177/0267658316684904
  • Oliveri, M.E., Ercikan, K., Zumbo, B.D., & Lawless, R. (2014). Uncovering substantive patterns in student responses in international large-scale assessments-Comparing a latent class to a manifest DIF approach. International Journal of Testing, 14(3), 265-287. https://doi.org/10.1080/15305058.2014.891223
  • Park, Y.S., Lee, Y.-S., & Xing, K. (2016). Investigating the impact of item parameter drift for item response theory models with mixture distributions. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2016.00255
  • Pohl, S., Grafe, L., & Rose, N. (2014). Dealing with omitted and not-reached items in competence tests: Evaluating approaches accounting for missing responses in item response theory models. Educational and Psychological Measurement, 74(3), 423–452. https://doi.org/10.1177/0013164413504926
  • Preinerstorfer, D., & Formann, A.K. (2012). Parameter recovery and model selection in mixed Rasch models. British Journal of Mathematical and Statistical Psychology, 65, 251–262. https://doi.org/10.1111/j.2044-8317.2011.02020.x
  • R Core Team (2020). R: A language and environment for statistical computing. R foundation for statistical computing, Vienna, Austria. https://www.R-project.org/
  • Richardson, J.T. (2011). Eta squared and partial eta squared as measures of effect size in educational research. Educational Research Review, 6, 135–147. https://doi.org/10.1016/j.edurev.2010.12.001
  • Rost, J. (1990). Rasch Models in Latent Classes: An integration of two approaches to item analysis. Applied Psychological Measurement, 14(3), 271–282.
  • Rost, J., Carstensen, C., & von Davier, M. (1997). Applying the mixed Rasch model to personality questionnaires. In J. Rost, & R. Langeheine (Eds.), Applications of latent trait and latent class models in the social sciences (pp. 324-332). Waxmann.
  • Sen, S. (2014). Robustness of mixture IRT models to violations of latent normality [Unpublished doctoral dissertation]. The University of Georgia.
  • Sen, S., Cohen, A.S., & Kim, S.-H. (2016). The impact of non-normality on extraction of spurious latent classes in mixture IRT models. Applied Psychological Measurement, 40(2), 98-113. https://doi.org/10.1177/0146621615605080
  • Sen, S., & Cohen, A.S. (2019). Applications of mixture IRT models: A literature review. Measurement: Interdisciplinary Research and Perspectives, 17(4), 177–191. https://doi.org/10.1080/15366367.2019.1583506
  • Zhang, D., Orrill, C., & Campbell, T. (2015). Using the mixture Rasch model to explore knowledge resources students invoke in mathematics and science assessments. School Science and Mathematics, 115(7), 356-365. https://doi.org/10.1111/ssm.12135


Details

Primary Language: English
Subjects: Field Education
Section: Articles
Authors

Fatıma Münevver Saatçioğlu 0000-0003-4797-207X

Hakan Yavuz Atar 0000-0001-5372-1926

Publication Date: December 22, 2022
Submission Date: August 19, 2022
Published in Issue: Year 2022, Volume 9, Issue 4

How to Cite

APA Saatçioğlu, F. M., & Atar, H. Y. (2022). Investigation of the effect of parameter estimation and classification accuracy in mixture IRT models under different conditions. International Journal of Assessment Tools in Education, 9(4), 1013-1029. https://doi.org/10.21449/ijate.1164590
