Research Article

Examining Dimensionality of Responses to the PISA 2012 Reading Comprehension Test Based on the Bifactor Model

Year 2019, Volume: 20 Issue: 1, 67 - 77, 31.07.2019
https://doi.org/10.12984/egeefd.470194

Abstract

The purpose of this study is to examine the dimensionality of the data set obtained from respondents' answers to the PISA 2012 reading comprehension test items by comparing the item parameters and item-level model-data fit statistics estimated under the two-parameter logistic model and the bifactor model. In the PISA 2012 Turkey administration, test Booklet 12, which contains 14 items developed from four reading texts, was administered to a group of 284 students. Comparisons between the item response theory models were based on item discrimination parameters, S-χ² item fit statistics, and explained common variance values. The results of the analyses showed that the discrimination parameters on the general dimension were similar to the item parameters of the unidimensional two-parameter logistic model. Although the bifactor model provided some improvement in item-level model-data fit over the unidimensional model, this improvement was not substantial, and both models yielded similar results in terms of item-level model-data fit. Based on these findings, it was concluded that reading comprehension ability, corresponding to the general dimension, is the dominant dimension underlying the test data; that the variance arising from the shared texts is small and negligible; and that the data satisfy the (essential) unidimensionality assumption.

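The two models compared in the abstract can be written down directly: the two-parameter logistic (2PL) model predicts a correct response from a single ability, while the bifactor model adds a text-specific dimension on top of the general reading dimension. A minimal sketch in Python with illustrative parameter values (the study used dedicated IRT software, and these numbers are not its estimates):

```python
import math

def p_2pl(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic model: P(correct) given ability theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_bifactor(theta_g: float, theta_s: float,
               a_g: float, a_s: float, d: float) -> float:
    """Bifactor model: every item loads on the general dimension theta_g
    (reading comprehension) and on one specific dimension theta_s (the
    reading text the item belongs to); d is an intercept."""
    return 1.0 / (1.0 + math.exp(-(a_g * theta_g + a_s * theta_s + d)))

# Illustrative values only: an average student on an average item.
print(round(p_2pl(0.0, 1.2, 0.0), 3))                  # 0.5
print(round(p_bifactor(0.0, 0.0, 1.2, 0.4, 0.0), 3))   # 0.5
```

When the specific discriminations are near zero, the bifactor model collapses toward the 2PL, which is essentially the pattern the abstract reports for this data set.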

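The explained common variance (ECV) index mentioned in the abstract summarizes how dominant the general dimension is: the share of common variance attributable to the general factor, computed from squared discrimination (loading) parameters. A sketch with hypothetical loadings, not the study's estimates:

```python
def explained_common_variance(a_general, a_specific):
    """ECV: sum of squared general-factor loadings divided by the sum of
    all squared loadings (general + specific). Values near 1 indicate an
    essentially unidimensional instrument."""
    general = sum(a * a for a in a_general)
    total = general + sum(a * a for a in a_specific)
    return general / total

# Hypothetical loadings for a handful of items (illustration only).
ecv = explained_common_variance([1.2, 0.9, 1.1, 1.0], [0.3, 0.2, 0.4, 0.1])
print(round(ecv, 2))  # 0.94
```

An ECV close to 1, as in this illustration, is the kind of evidence the study uses to conclude that the variance due to the shared reading texts is negligible.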


Details

Primary Language Turkish
Subjects Other Fields of Education
Journal Section Articles
Authors

Seval Kula Kartal 0000-0002-3018-6972

Publication Date July 31, 2019
Published in Issue Year 2019 Volume: 20 Issue: 1

Cite

APA Kula Kartal, S. (2019). PISA 2012 Okuduğunu Anlama Testine Verilen Yanıtların Boyutluluğunun İki Faktör Modeline Dayalı Olarak İncelenmesi. Ege Eğitim Dergisi, 20(1), 67-77. https://doi.org/10.12984/egeefd.470194