Research Article

A Guide for More Accurate and Precise Estimations in Simulative Unidimensional IRT Models

Year 2021, Volume 8, Issue 2, 423-453, 10.06.2021
https://doi.org/10.21449/ijate.790289

Abstract

A great deal of item response theory (IRT) research is conducted through simulation, with item and ability parameters estimated under varying numbers of replications and different test conditions. However, it is not clear what the appropriate number of replications should be. The aim of the current study is to develop guidelines for an adequate number of replications in Monte Carlo simulation studies involving unidimensional IRT models. To this end, 192 simulation conditions were generated by crossing four sample sizes, two test lengths, eight replication numbers, and three unidimensional IRT models. The accuracy and precision of item and ability parameter estimates, as well as model fit values, were evaluated with respect to the number of replications. For the item and ability parameters, mean error, root mean square error, and standard error of estimates were considered; for model fit, M_2, RMSEA_2, and Type I error rates. Although the number of replications did not seem to influence model fit, it was decisive for Type I error inflation and for the accuracy of error estimation in all IRT models. It was concluded that, to obtain accurate Type I error rate estimates, the number of replications should be at least 625 for all IRT models; 156 or more replications can also be recommended. Item parameter biases were examined, and the largest bias values were obtained from the 3PL model. It can be concluded that the more parameters a model must estimate, the more biased its estimates.
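The evaluation criteria named in the abstract (mean error for accuracy, standard error of estimates for precision, and RMSE combining both) can be illustrated with a small Monte Carlo sketch for ability estimation under a 2PL model. This is not the authors' code: the 20-item test, the item parameter ranges, and the true ability value are made-up assumptions, and ability is estimated by a simple Newton-Raphson MLE with item parameters treated as known.

```python
import math
import random

def p_2pl(theta, a, b):
    """2PL item response function: P(correct | theta, a, b)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mle_theta(responses, items, n_iter=25):
    """Newton-Raphson MLE of ability; item parameters treated as known."""
    theta = 0.0
    for _ in range(n_iter):
        probs = [p_2pl(theta, a, b) for a, b in items]
        # first and second derivatives of the 2PL log-likelihood in theta
        d1 = sum(a * (u - p) for (a, _), u, p in zip(items, responses, probs))
        d2 = -sum(a * a * p * (1.0 - p) for (a, _), p in zip(items, probs))
        theta -= d1 / d2
        theta = max(-4.0, min(4.0, theta))  # keep perfect/zero scores finite
    return theta

random.seed(2021)
# hypothetical 20-item test: discriminations in [0.8, 2.0], difficulties in [-2, 2]
items = [(random.uniform(0.8, 2.0), random.uniform(-2.0, 2.0)) for _ in range(20)]
theta_true = 0.5
R = 625  # the replication count the study recommends

estimates = []
for _ in range(R):
    resp = [1 if random.random() < p_2pl(theta_true, a, b) else 0 for a, b in items]
    estimates.append(mle_theta(resp, items))

# accuracy: mean error (bias); combined: RMSE; precision: SD of estimates
mean_err = sum(e - theta_true for e in estimates) / R
rmse = math.sqrt(sum((e - theta_true) ** 2 for e in estimates) / R)
se = math.sqrt(sum((e - theta_true - mean_err) ** 2 for e in estimates) / R)
print(f"ME={mean_err:.3f}  RMSE={rmse:.3f}  SE={se:.3f}")
```

Because all replication-level estimates are retained, RMSE² decomposes exactly into ME² plus SE² (with the population variance), which is why accuracy and precision can be reported as separate criteria.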

References

  • Ames, A. J., Leventhal, B. C., & Ezike, N. C. (2020). Monte Carlo simulation in item response theory applications using SAS. Measurement: Interdisciplinary Research and Perspectives, 18(2), 55-74. https://doi.org/10.1080/15366367.2019.1689762
  • Babcock, B. (2011). Estimating a noncompensatory IRT model using Metropolis within Gibbs sampling. Applied Psychological Measurement, 35(4), 317-329. http://dx.doi.org/10.1177/0146621610392366
  • Bahry, L. M. (2012). Polytomous item response theory parameter recovery: an investigation of nonnormal distributions and small sample size [Master’s Thesis]. ProQuest Dissertations and Theses Global.
  • Baker, F. B. (1998). An investigation of the item parameter recovery characteristics of a Gibbs sampling procedure. Applied Psychological Measurement, 22(2), 153-169. https://doi.org/10.1177/01466216980222005
  • Baldwin, P. (2011). A strategy for developing a common metric in item response theory when parameter posterior distributions are known. Journal of Educational Measurement, 48(1), 1-11. Retrieved December 9, 2020, from http://www.jstor.org/stable/23018061
  • Barış Pekmezci, F., & Gülleroğlu, H. (2019). Investigation of the orthogonality assumption in the bifactor item response theory. Eurasian Journal of Educational Research, 19(79), 69-86. http://dx.doi.org/10.14689/ejer.2019.79.4
  • Bulut, O., & Sünbül, Ö. (2017). Monte Carlo simulation studies in item response theory with the R programming language. Journal of Measurement and Evaluation in Education and Psychology, 8(3), 266-287. https://doi.org/10.21031/epod.305821
  • Cai, L., & Thissen, D. (2014). Modern approaches to parameter estimation in item response theory. In Handbook of item response theory modeling: Applications to typical performance assessment. Routledge.
  • Chalmers, R. P. (2012). mirt: A multidimensional item response theory package for the R environment. Journal of Statistical Software, 48(6), 1-29. https://doi.org/10.18637/jss.v048.i06
  • Chuah, S. C., Drasgow, F., & Luecht, R. (2006). How big is big enough? Sample size requirements for CAST item parameter estimation. Applied Measurement in Education, 19(3), 241-255. https://doi.org/10.1207/s15324818ame1903_5
  • Cohen, A. S., Kim, S. H., & Baker, F. B. (1993). Detection of differential item functioning in the graded response model. Applied Psychological Measurement, 17(4), 335-350. https://doi.org/10.1177/014662169301700402
  • Crişan, D. R., Tendeiro, J. N., & Meijer, R. R. (2017). Investigating the practical consequences of model misfit in unidimensional IRT models. Applied Psychological Measurement, 41(6), 439-455. https://doi.org/10.1177/0146621617695522
  • Crocker, L., & Algina, J. (1986). Introduction to classical and modern test theory. Orlando: Harcourt Brace Jovanovich Inc.
  • De Ayala, R. J. (2009). The theory and practice of item response theory. New York, NY: Guilford Press.
  • De La Torre, J., & Patz, R. J. (2005). Making the most of what we have: A practical application of multidimensional item response theory in test scoring. Journal of Educational and Behavioral Statistics, 30(3), 295-311. https://www.jstor.org/stable/3701380
  • DeMars, C. E. (2002, April). Recovery of graded response and partial credit parameters in MULTILOG and PARSCALE. Annual meeting of American Educational Research Association, Chicago. https://commons.lib.jmu.edu/cgi/viewcontent.cgi?article=1034&context=gradpsych
  • Feinberg, R. A., & Rubright, J. D. (2016). Conducting simulation studies in psychometrics. Educational Measurement: Issues and Practice, 35(2), 36-49. https://doi.org/10.1111/emip.12111
  • Fu, J. (2019). Maximum marginal likelihood estimation with an expectation–maximization algorithm for multigroup/mixture multidimensional item response theory models (No. RR-19-35). ETS Research Report Series, https://doi.org/10.1002/ets2.12272
  • Gao, F., & Chen, L. (2005). Bayesian or non-Bayesian: A comparison study of item parameter estimation in the three-parameter logistic model. Applied Measurement in Education, 18(4), 351-380. https://doi.org/10.1207/s15324818ame1804_2
  • Glass, G. V., Peckham, P. D., & Sanders, J. R. (1972). Consequences of failure to meet assumptions underlying the fixed effects analyses of variance and covariance. Review of Educational Research, 42(3), 237-288. https://doi.org/10.3102/00346543042003237
  • Goldman, S. H., & Raju, N. S. (1986). Recovery of one-and two-parameter logistic item parameters: An empirical study. Educational and Psychological Measurement, 46(1), 11-21. https://doi.org/10.1177/0013164486461002
  • Hair, J. F., Black W. C., Babin, B. J., & Anderson, R. E. (2019). Multivariate data analysis. (8th edition). Annabel Ainscow.
  • Han, K. T. (2007). WinGen: Windows software that generates IRT parameters and item responses. Applied Psychological Measurement, 31(5), 457-459. https://doi.org/10.1177/0146621607299271
  • Hanson, B. A. (1998, October). IRT parameter estimation using the EM algorithm. http://www.b-a-h.com/papers/note9801.pdf
  • Harwell, M. (1997). Analyzing the results of Monte Carlo studies in item response theory. Educational and Psychological Measurement, 57(2), 266-279. https://doi.org/10.1177/0013164497057002006
  • Harwell, M. R., & Baker, F. B. (1991). The use of prior distributions in marginalized Bayesian item parameter estimation: A didactic. Applied Psychological Measurement, 15(4), 375–389. https://doi.org/10.1177/014662169101500409
  • Harwell, M., Stone, C. A., Hsu, T. C., & Kirisci, L. (1996). Monte Carlo studies in item response theory. Applied Psychological Measurement, 20(2), 101-125. https://doi.org/10.1177/014662169602000201
  • Hulin, C. L., Lissak, R. I., & Drasgow, F. (1982). Recovery of two and three-parameter logistic item characteristic curves: A Monte Carlo study. Applied Psychological Measurement, 6(3), 249–260. https://doi.org/10.1177/014662168200600301
  • Jiang, S., Wang, C., & Weiss, D. J. (2016). Sample size requirements for estimation of item parameters in the multidimensional graded response model. Frontiers in Psychology, 7, 109. https://doi.org/10.3389/fpsyg.2016.00109
  • Kirisci, L., Hsu, T. C., & Yu, L. (2001). Robustness of item parameter estimation programs to assumptions of unidimensionality and normality. Applied Psychological Measurement, 25(2), 146-162. https://doi.org/10.1177/01466210122031975
  • Kleijnen, J. P. (1987). Statistical tools for simulation practitioners. Marcel Dekker.
  • Lee, S., Bulut, O., & Suh, Y. (2017). Multidimensional extension of multiple indicators multiple causes models to detect DIF. Educational and Psychological Measurement, 77(4), 545–569. https://doi.org/10.1177/0013164416651116
  • Lord, F. M. (1968). An analysis of the verbal scholastic aptitude test using Birnbaum's three-parameter logistic model. Educational and Psychological Measurement, 28(4), 989-1020. https://doi.org/10.1177/001316446802800401
  • Matlock, K. L., & Turner, R. (2016). Unidimensional IRT item parameter estimates across equivalent test forms with confounding specifications within dimensions. Educational and Psychological Measurement, 76(2), 258-279. https://doi.org/10.1177/0013164415589756
  • Matlock Cole, K., & Paek, I. (2017). PROC IRT: A SAS procedure for item response theory. Applied Psychological Measurement, 41(4), 311-320. https://doi.org/10.1177/0146621616685062
  • McDonald, R. P. (1982). Linear versus nonlinear models in item response theory. Applied Psychological Measurement, 6(4), 379-396. https://doi.org/10.1177/014662168200600402
  • Mislevy, R. J., & Stocking, M. L. (1989). A consumer's guide to LOGIST and BILOG. Applied Psychological Measurement, 13(1), 57-75. https://doi.org/10.1177/014662168901300106
  • Mooney, C. Z. (1997). Monte Carlo simulation. Thousand Oaks, CA: Sage.
  • Mundform, D. J., Schaffer, J., Kim, M. J., Shaw, D., Thongteeraparp, A., & Supawan, P. (2011). Number of replications required in Monte Carlo simulation studies: A synthesis of four studies. Journal of Modern Applied Statistical Methods, 10(1), 19-28. https://doi.org/10.22237/jmasm/1304222580
  • Park, Y. S., Lee, Y. S., & Xing, K. (2016). Investigating the impact of item parameter drift for item response theory models with mixture distributions. Frontiers in Psychology, 7, 255. https://doi.org/10.3389/fpsyg.2016.00255
  • Patsias, K., Sheng, Y., & Rahimi, S. (2009, September 24-26). A high performance Gibbs sampling algorithm for item response theory. 22nd International Conference on Parallel and Distributed Computing and Communication Systems, Kentucky, USA.
  • Patsula, L. N., & Gessaroli, M. E. (1995, April). A comparison of item parameter estimates and ICCs produced. https://files.eric.ed.gov/fulltext/ED414333.pdf
  • Preecha, C. (2004). Numbers of replications required in ANOVA simulation studies [Doctoral dissertation, University of Northern Colorado]. ProQuest Dissertations and Theses Global.
  • Reise, S. P., & Yu, J. (1990). Parameter recovery in the graded response model using MULTILOG. Journal of Educational Measurement, 27(2), 133-144. https://www.jstor.org/stable/1434973
  • Reise, S., Moore, T., & Maydeu-Olivares, A. (2011). Target rotations and assessing the impact of model violations on the parameters of unidimensional item response theory models. Educational and Psychological Measurement, 71(4), 684-711. https://doi.org/10.1177/0013164410378690
  • Roberts, J. S., Donoghue, J. R., & Laughlin, J. E. (2002). Characteristics of MML/EAP parameter estimates in the generalized graded unfolding model. Applied Psychological Measurement, 26(2), 192-207. https://doi.org/10.1177/01421602026002006
  • Rubinstein, R. Y. (1981). Simulation and the Monte Carlo method. John Wiley and Sons, New York. https://doi.org/10.1002/9780470316511
  • Sahin, A., & Anil, D. (2017). The effects of test length and sample size on item parameters in item response theory. Educational Sciences: Theory & Practice, 17(1), 321-335. https://doi.org/10.12738/estp.2017.1.0270
  • Sarkar, D. (2008). Lattice: multivariate data visualization with R. Springer, New York.
  • Schumacker, R. E, Smith, R. M., & Bush, J. M. (1994, April). Examining replication effects in Rasch fit statistics. American Educational Research Association Annual Meeting, New Orleans.
  • Sen, S., Cohen, A. S., & Kim, S. H. (2016). The impact of non-normality on extraction of spurious latent classes in mixture IRT models. Applied Psychological Measurement, 40(2), 98-113. https://doi.org/10.1177/0146621615605080
  • Sheng, Y., & Wikle, C. K. (2007). Comparing multiunidimensional and unidimensional item response theory models. Educational and Psychological Measurement, 67(6), 899-919. https://doi.org/10.1177/0013164406296977
  • Tavares, H. R., Andrade, D. F. D., & Pereira, C. A. D. B. (2004). Detection of determinant genes and diagnostic via item response theory. Genetics and Molecular Biology, 27(4), 679-685. https://doi.org/10.1590/S1415-47572004000400033
  • Thissen, D., & Wainer, H. (1982). Some standard errors in item response theory. Psychometrika, 47(4), 397-412. https://doi.org/10.1007/BF02293705
  • Thompson, B. (2004). Exploratory and confirmatory factor analysis. Amer Psychological Assn.
  • Thompson, B. (2006). Foundations of behavioral statistics: An insight-based approach. Guilford Press.
  • Thompson, N. A. (2009). Ability estimation with item response theory. Assessment Systems Corporation. https://assess.com/docs/Thompson_(2009)_ _Ability_estimation_with_IRT.pdf
  • Şengül Avşar, A., & Tavşancıl, E. (2017). Examination of polytomous items’ psychometric properties according to nonparametric item response theory models in different test conditions. Educational Sciences: Theory & Practice, 17(2). https://doi.org/10.12738/estp.2017.2.0246
  • van der Linden, W. J. (Ed.). (2018). Handbook of item response theory, three volume set. CRC Press.
  • van Onna, M. J. H. (2004). Ordered latent class models in nonparametric item response theory. [Doctoral dissertation]. University of Groningen.
  • Weissman, A. (2013). Optimizing information using the EM algorithm in item response theory. Annals of Operations Research, 206(1), 627-646. https://doi.org/10.1007/s10479-012-1204-4
  • Yang, S. (2007). A comparison of unidimensional and multidimensional RASCH models using parameter estimates and fit indices when assumption of unidimensionality is violated [Doctoral dissertation, The Ohio State University]. ProQuest Dissertations and Theses Global.
  • Yen, W. M. (1987). A comparison of the efficiency and accuracy of BILOG and LOGIST. Psychometrika, 52(2), 275-291. https://doi.org/10.1007/BF02294241
  • Zhang, B. (2008). Application of unidimensional item response models to tests with items sensitive to secondary dimensions. The Journal of Experimental Education, 77(2), 147-166. https://doi.org/10.3200/JEXE.77.2.147-166

There are 64 citations in total.

Details

Primary Language English
Subjects Studies on Education
Journal Section Articles
Authors

Fulya Baris Pekmezci 0000-0001-6989-512X

Asiye Şengül Avşar 0000-0001-5522-2514

Publication Date June 10, 2021
Submission Date September 4, 2020
Published in Issue Year 2021

Cite

APA Baris Pekmezci, F., & Şengül Avşar, A. (2021). A Guide for More Accurate and Precise Estimations in Simulative Unidimensional IRT Models. International Journal of Assessment Tools in Education, 8(2), 423-453. https://doi.org/10.21449/ijate.790289
