Research Article

Can Factor Scores be Used Instead of Total Score and Ability Estimation?

Year 2019, Volume: 6, Issue: 1, 25-35, 21.03.2019
https://doi.org/10.21449/ijate.442542

Abstract

The purpose of this study is to investigate whether factor scores can be used instead of ability estimates and total scores. To this end, the relationships among total scores, ability estimates, and factor scores were examined. The data were the Turkish subtest of the Transition from Primary to Secondary Education (TEOG) exam administered in April 2014. Total scores were calculated as the number of items each individual answered correctly. Ability estimates were obtained from the three-parameter logistic (3PL) model, chosen from among the item response theory (IRT) models, and factor scores were estimated with the Bartlett method. An ability estimate, a total score, and a factor score were thus obtained for each individual. The relationships among these variables proved high, positive, and statistically significant, which suggests that the scores can be used interchangeably and, in particular, that factor scores can stand in for the other two. At the same time, individuals with equal total scores still differed in their factor scores and ability estimates. Factor scores are therefore recommended especially when the IRT assumptions are not met or when the sample size is small.
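The abstract describes a simple three-score pipeline, so a brief illustration may help. The R sketch below is a minimal example under stated assumptions, not the authors' actual analysis: R and the irtoys and psych packages are cited in the references, but the response matrix here is simulated, and the one-factor solution, tetrachoric correlations, and Bayes modal ability scoring are plausible choices the abstract does not specify.

# Minimal sketch (not the authors' code): total score, 3PL ability estimate,
# and Bartlett factor score for a 0/1 item-response matrix, then their correlations.
library(irtoys)  # 3PL estimation and scoring (cited in the references)
library(psych)   # factor analysis with Bartlett factor scores (cited in the references)

set.seed(1)
# Placeholder data: 500 examinees, 20 items with a = 1, b ~ N(0, 1), c = 0.2.
resp <- sim(ip = cbind(1, rnorm(20), 0.2), x = rnorm(500))

# 1) Total score: number of correct answers per examinee.
total <- rowSums(resp)

# 2) IRT ability under the 3PL model,
#    P_i(theta) = c_i + (1 - c_i) / (1 + exp(-a_i * (theta - b_i))).
ip    <- est(resp, model = "3PL", engine = "ltm")$est      # item parameters a, b, c
theta <- mlebme(resp = resp, ip = ip, method = "BM")[, 1]  # Bayes modal estimates;
# plain ML diverges for all-zero or perfect response patterns.

# 3) Bartlett factor scores from a one-factor model; tetrachoric
#    correlations (cor = "tet") suit dichotomous items.
fscore <- fa(resp, nfactors = 1, cor = "tet", scores = "Bartlett")$scores[, 1]

# The study's question: how strongly do the three scores agree?
print(cor(cbind(total, theta, fscore)))

With a unidimensional, well-fitting item set, the off-diagonal correlations typically come out close to 1, which matches the pattern the study reports.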

References

  • Akyıldız, M., & Şahin, M. D. (2017). Açıköğretimde kullanılan sınavlardan Klasik Test Kuramına ve Madde Tepki Kuramına göre elde edilen yetenek ölçülerinin karşılaştırılması. Açıköğretim Uygulamaları ve Araştırmaları Dergisi, 3(4), 141–159.
  • Anderson, T. W., & Rubin, H. (1956). Statistical inference in factor analysis. In J. Neyman (Ed.), Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability (Vol. 5, pp. 111–150). Berkeley: University of California Press.
  • Bulut, G. (2018). Açık ve uzaktan öğrenmede şans başarısı: Klasik Test Kuramı (KTK) ve Madde Tepki Kuramı (MTK) temelinde karşılaştırmalı bir analiz. Açıköğretim Uygulamaları ve Araştırmaları Dergisi, 4(1), 78–93.
  • Büyüköztürk, Ş., Kılıç-Çakmak, E., Akgün, Ö. E., Karadeniz, Ş., & Demirel, F. (2013). Bilimsel araştırma yöntemleri (14. Baskı). Ankara: Pegem Akademi.
  • Byrne, B. M. (2016). Structural equation modeling with AMOS: Basic concepts, applications, and programming (3rd ed.). Routledge.
  • Çakıcı-Eser, D. (2013). PISA 2009 okuma testinden elde edilen iki kategorili verilerin BILOG programı ile incelenmesi. Eğitim ve Öğretim Araştırmaları Dergisi, 2(4), 135–144.
  • Cappelleri, J. C., Jason Lundy, J., & Hays, R. D. (2014). Overview of Classical Test Theory and Item Response Theory for the quantitative assessment of items in developing patient-reported outcomes measures. Clinical Therapeutics, 36(5), 648–662. https://doi.org/10.1016/j.clinthera.2014.04.006
  • Çelen, Ü., & Aybek, E. C. (2013). Öğrenci başarısının öğretmen yapımı bir testle klasik test kuramı ve madde tepki kuramı yöntemleriyle elde edilen puanlara göre karşılaştırılması. Eğitimde ve Psikolojide Ölçme ve Değerlendirme Dergisi, 4(2), 64–75. Retrieved from http://dergipark.ulakbim.gov.tr/epod/article/view/5000045503
  • Chou, C. P., & Bentler, P. M. (1995). Estimates and tests in structural equation modeling. In R. H. Hoyle (Ed.), Structural equation modeling: Concepts, issues, and applications. Thousand Oaks, CA: SAGE.
  • Comrey, A. L. (1988). Factor-analytic methods of scale development in personality and clinical psychology. Journal of Consulting and Clinical Psychology, 56(5), 754–761. https://doi.org/10.1037/0022-006X.56.5.754
  • Comrey, A. L., & Lee, H. B. (1992). A first course in factor analysis (2nd ed.). London: Lawrence Erlbaum Associates, Inc.
  • Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research & Evaluation, 10(7), 27–29.
  • Creswell, J. W. (2013). Research design: Qualitative, quantitative, and mixed methods approaches (4th ed.). Thousand Oaks, CA: Sage Publications.
  • Curran, P. J., West, S. G., & Finch, J. F. (1996). The robustness of test statistics to nonnormality and specification error in confirmatory factor analysis. Psychological Methods, 1(1), 16–29. https://doi.org/10.1037/1082-989X.1.1.16
  • de Ayala, R. J. (2009). The theory and practice of item response theory. New York, NY: The Guilford Press.
  • DeMars, C. (2010). Item response theory. New York: Oxford University Press.
  • DiStefano, C., Zhu, M., & Mîndrilă, D. (2009). Understanding and using factor scores: Considerations for the applied researcher. Practical Assessment, Research & Evaluation, 14(20), 1–11.
  • Erkuş, A. (2014). Psikolojide ölçme ve ölçek geliştirme-I: Temel kavramlar ve işlemler (2. Baskı). Ankara: Pegem Akademi.
  • Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272–299. https://doi.org/10.1037/1082-989X.4.3.272
  • Fava, J. L., & Velicer, W. F. (1992). An empirical comparison of factor, image, component, and scale scores. Multivariate Behavioral Research, 27(3), 301–322. https://doi.org/10.1207/s15327906mbr2703_1
  • Finney, S. J., & DiStefano, C. (2013). Nonnormal and categorical data in structural equation modeling. In G. R. Hancock & R. O. Mueller (Eds.), Structural equation modeling: A second course (2nd ed., pp. 439–492). Charlotte, NC: IAP.
  • Floyd, F. J., & Widaman, K. F. (1995). Factor analysis in the development and refinement of clinical assessment instruments. Psychological Assessment, 7(3), 286–299.
  • Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2012). How to design and evaluate research in education (8th ed.). New York: McGraw-Hill.
  • Gorsuch, R. L. (1974). Factor analysis (1st ed.). Toronto: W. B. Saunders Company.
  • Green, B. F. (1976). On the factor score controversy. Psychometrika, 41(2), 263–266. https://doi.org/10.1007/BF02291843
  • Grice, J. W. (2001). Computing and evaluating factor scores. Psychological Methods, 6(4), 430–450. https://doi.org/10.1037//1082-989X.6.4.430
  • Guadagnoli, E., & Velicer, W. F. (1988). Relation of sample size to the stability of component patterns. Psychological Bulletin, 103(2), 265–275.
  • Gulliksen, H. (1950). Theory of mental tests. New York: Wiley.
  • Hambleton, R. K., & Swaminathan, H. (1985). Item response theory: Principles and applications. New York: Springer Science & Business Media, LLC.
  • Hershberger, S. L. (2005). Factor score estimation. In B. S. Everitt & D. C. Howell (Eds.), Encyclopedia of Statistics in Behavioral Science (Vol. 2, pp. 636–644). Chichester, UK: John Wiley & Sons, Ltd. https://doi.org/10.1002/0470013192.bsa726
  • Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30(2), 179–185. https://doi.org/10.1007/BF02289447
  • İlhan, M. (2016). Açık uçlu sorularda yapılan ölçmelerde Klasik Test Kuramı ve çok yüzeyli Rasch modeline göre hesaplanan yetenek kestirimlerinin karşılaştırılması. Hacettepe University Journal of Education, 31(2), 346–368. https://doi.org/10.16986/HUJE.2016015182
  • Kaiser, H. F., & Rice, J. (1974). Little Jiffy, Mark IV. Educational and Psychological Measurement, 34(1), 111–117. https://doi.org/10.1177/001316447403400115
  • Kline, R. B. (2016). Principles and practice of structural equation modeling (4th ed.). New York, NY: The Guilford Press.
  • Leech, N. L., Barrett, K. C., & Morgan, G. A. (2015). IBM SPSS for intermediate statistics (5th ed.). East Sussex: Routledge.
  • Lord, F. M. (1980). Applications of item response theory to practical testing problems. New Jersey: Lawrence Erlbaum Associates, Inc.
  • Macdonald, P., & Paunonen, S. V. (2002). A Monte Carlo comparison of item and person statistics based on item response theory versus classical test theory. Educational and Psychological Measurement, 62(6), 921–943. https://doi.org/10.1177/0013164402238082
  • Mardia, K. V. (1970). Measures of multivariate skewness and kurtosis with applications. Biometrika, 57(3), 519–530.
  • Mueller, R. O. (1996). Basic principles of structural equation modeling: An introduction to LISREL and EQS. New York: Springer Science & Business Media, LLC.
  • Muthén, L. K., & Muthén, B. O. (2012). Mplus statistical modeling software: Release 7.0. Los Angeles, CA: Muthén & Muthén.
  • Partchev, I. (2016). irtoys: A collection of functions related to item response theory (IRT). Retrieved from https://cran.r-project.org/package=irtoys
  • Price, L. R. (2017). Psychometric methods: Theory and practice. New York, NY: The Guilford Press.
  • Progar, S., & Sočan, G. (2008). An empirical comparison of Item Response Theory and Classical Test Theory. Horizons of Psychology, 17(3), 5–24.
  • R Core Team. (2017). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. Retrieved from https://www.r-project.org/.
  • Revelle, W. (2018). psych: Procedures for Psychological, Psychometric, and Personality Research. Evanston, Illinois. Retrieved from https://cran.r-project.org/package=psych
  • Robitzsch, A. (2017). sirt: Supplementary item response theory models. Retrieved from https://cran.r-project.org/package=sirt
  • Stage, C. (1998a). A comparison between item analysis based on item response theory and classical test theory. A study of the SweSAT Subtest ERC. Educational Measurement. Retrieved from http://www.sprak.umu.se/digitalAssets/59/59551_enr2998sec.pdf
  • Stage, C. (1998b). A comparison between item analysis based on item response theory and classical test theory. A study of the SweSAT Subtest WORD. Educational Measurement. Retrieved from http://www.sprak.umu.se/digitalAssets/59/59551_enr2998sec.pdf
  • Streiner, D. L. (1994). Figuring out factors: The use and misuse of factor analysis. Canadian Journal of Psychiatry, 39(3), 135–140.
  • Tabachnick, B. G., & Fidell, L. S. (2012). Using multivariate statistics (6th ed.). Boston: Pearson.
  • Velicer, W. F. (1976). The relation between factor score estimates, image scores, and principal component scores. Educational and Psychological Measurement, 36(1), 149–159. https://doi.org/10.1177/001316447603600114
  • Wickham, H. (2016). ggplot2: Elegant graphics for data analysis. Springer-Verlag New York. Retrieved from http://ggplot2.org
  • Williams, J. S. (1978). A definition for the common-factor analysis model and the elimination of problems of factor score indeterminacy. Psychometrika, 43(3), 293–306. https://doi.org/10.1007/BF02293640
  • Xu, T., & Stone, C. A. (2012). Using IRT trait estimates versus summated scores in predicting outcomes. Educational and Psychological Measurement, 72(3), 453–468. https://doi.org/10.1177/0013164411419846
  • Yen, W. M. (1984). Effects of local item dependence on the fit and equating performance of the three-parameter logistic model. Applied Psychological Measurement, 8(2), 125–145. https://doi.org/10.1177/014662168400800201

There are 55 references in total.

Details

Primary Language: English
Subjects: Studies on Education
Section: Articles
Authors

Abdullah Faruk Kılıç (ORCID: 0000-0003-3129-1763)

Publication Date: March 21, 2019
Submission Date: July 11, 2018
Published in Issue: Year 2019, Volume: 6, Issue: 1

Cite

APA: Kılıç, A. F. (2019). Can Factor Scores be Used Instead of Total Score and Ability Estimation? International Journal of Assessment Tools in Education, 6(1), 25-35. https://doi.org/10.21449/ijate.442542
