Research Article

Effects of Various Simulation Conditions on Latent-Trait Estimates: A Simulation Study

Year 2018, Volume: 5 Issue: 2, 263 - 273, 19.05.2018

Abstract

The aim of this simulation study was to determine the relationship between true latent scores and estimated latent scores under various control variables and different statistical models. The study also aimed to compare the statistical models and to determine the effects of different distribution types, response formats, and sample sizes on latent score estimation. 108 different datasets, formed by crossing three distribution types (positively skewed, normal, negatively skewed), three response formats (three-, five-, and seven-point Likert) and four sample sizes (100, 250, 500, 1000), were used in the present study. The results show that distribution type and response format had a significant effect on the determination coefficients in almost all simulations. When the general performance of the models is evaluated, MR and GRM performed better than the other models; in particular, these models performed rather well when the distribution was either negatively or positively skewed and the sample size was small.
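The crossed design described in the abstract (3 distribution types × 3 response formats × 4 sample sizes) can be sketched as a small simulation. The sketch below is illustrative only: `simulate_condition` is a hypothetical helper, the sum score is a simplified stand-in for the model-based estimators (MR, GRM) compared in the study, and the item parameters are assumptions, not the authors' actual generating values.

```python
import numpy as np

def simulate_condition(n, n_categories, skew, n_items=10, seed=0):
    """Simulate one cell of the design: draw true latent scores,
    generate Likert-type responses, and return the determination
    coefficient (R^2) between true scores and a sum-score estimate."""
    rng = np.random.default_rng(seed)
    if skew == "normal":
        theta = rng.standard_normal(n)
    else:
        # Standardized chi-square transform gives a positively skewed trait;
        # negate it for negative skew.
        theta = (rng.chisquare(df=4, size=n) - 4) / np.sqrt(8)
        if skew == "negative":
            theta = -theta
    # Graded-response-style categorization: threshold a noisy copy of theta
    # at evenly spaced cut points (unit-discrimination items, an assumption).
    cuts = np.linspace(-1.5, 1.5, n_categories - 1)
    responses = np.empty((n, n_items), dtype=int)
    for j in range(n_items):
        noisy = theta + rng.standard_normal(n)
        responses[:, j] = np.searchsorted(cuts, noisy)
    sum_score = responses.sum(axis=1)
    r = np.corrcoef(theta, sum_score)[0, 1]
    return r ** 2

# One cell of the 3 x 3 x 4 design: n = 250, five-point Likert, normal trait.
r2 = simulate_condition(n=250, n_categories=5, skew="normal")
```

Looping this function over all combinations of distribution, response format, and sample size reproduces the structure of the study's conditions; the study itself additionally varies the scoring model, which the sum score here only approximates.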



Details

Primary Language English
Subjects Studies on Education
Journal Section Articles
Authors

Hakan Koğar (ORCID: 0000-0001-5749-9824)

Publication Date May 19, 2018
Submission Date January 10, 2018
Published in Issue Year 2018 Volume: 5 Issue: 2

Cite

APA Koğar, H. (2018). Effects of Various Simulation Conditions on Latent-Trait Estimates: A Simulation Study. International Journal of Assessment Tools in Education, 5(2), 263-273.
