Year 2019, Volume 8, Issue 3, Pages 160-179, 2019-07-31

Comparison of factor retention methods on binary data: A simulation study

Abdullah Faruk Kılıç [1] , İbrahim Uysal [2]


In this study, the purpose was to compare factor retention methods under simulation conditions. To this end, the number of factors (1, 2 [simple structure]), sample size (250, 1,000, and 3,000), number of items (20, 30), average factor loading (0.50, 0.70), and correlation matrix (Pearson product-moment [PPM] and tetrachoric) were investigated as simulation conditions. For each condition, 1,000 replications were conducted, and the resulting 24,000 data sets were analyzed using both PPM and tetrachoric correlation matrices. The performances of the Parallel Analysis (PA), Minimum Average Partial (MAP), DETECT, Optimal Coordinate, and Acceleration Factor methods were compared in terms of the percentage of correct estimates and mean difference values. The results indicated that MAP, applied to both tetrachoric and PPM correlation matrices, showed the best performance. PA also performed well with the PPM correlation matrix, but its performance with the tetrachoric correlation matrix decreased in small samples. The Acceleration Factor method proposed one factor under all simulation conditions. In unidimensional structures, the DETECT method was affected by both sample size and average factor loading.
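For readers unfamiliar with the two strongest performers named in the abstract, the following is a minimal NumPy sketch of Horn's parallel analysis and Velicer's minimum average partial (MAP) test, applied to the PPM (phi) correlation matrix of simulated binary two-factor data. This is an illustration under simplified assumptions (normal random comparison data and a mean-eigenvalue criterion for PA), not the authors' implementation; all function names and design parameters here are the sketch's own choices.

```python
# Illustrative sketch (not the study's code) of two compared retention
# methods: Horn's parallel analysis and Velicer's MAP test.
import numpy as np

def parallel_analysis(data, n_sim=100, seed=0):
    """Retain eigenvalues of the sample correlation matrix that exceed
    the mean eigenvalues obtained from random normal data of the same
    shape (a simplified mean-eigenvalue criterion)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    real_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim_eigs = np.zeros((n_sim, p))
    for i in range(n_sim):
        r = rng.standard_normal((n, p))
        sim_eigs[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    mean_sim = sim_eigs.mean(axis=0)
    k = 0
    for real, sim in zip(real_eigs, mean_sim):  # count until first failure
        if real <= sim:
            break
        k += 1
    return k

def velicer_map(data):
    """Return the number of components that minimizes the average
    squared partial correlation after partialling out the first m
    principal components (Velicer's MAP criterion)."""
    R = np.corrcoef(data, rowvar=False)
    p = R.shape[0]
    lam, vec = np.linalg.eigh(R)        # eigenvalues in ascending order
    lam, vec = lam[::-1], vec[:, ::-1]  # reorder to descending

    def avg_sq_offdiag(M):
        return (np.sum(M ** 2) - np.sum(np.diag(M) ** 2)) / (p * (p - 1))

    criteria = [avg_sq_offdiag(R)]      # m = 0: raw correlations
    for m in range(1, p):
        A = vec[:, :m] * np.sqrt(lam[:m])  # loadings of first m components
        C = R - A @ A.T                    # partial covariance matrix
        d = np.sqrt(np.diag(C))
        criteria.append(avg_sq_offdiag(C / np.outer(d, d)))
    return int(np.argmin(criteria))

# Simulated binary data: 2 factors, 20 items (10 per factor), loadings
# 0.7, n = 1000, dichotomized at zero -- parameters chosen to mirror
# one cell of the design, purely for illustration.
rng = np.random.default_rng(1)
n, p = 1000, 20
f = rng.standard_normal((n, 2))
loadings = np.zeros((p, 2))
loadings[:10, 0] = 0.7
loadings[10:, 1] = 0.7
latent = f @ loadings.T + rng.standard_normal((n, p)) * np.sqrt(1 - 0.7 ** 2)
binary = (latent > 0).astype(float)

print(parallel_analysis(binary), velicer_map(binary))
```

With strong simple structure, high loadings, and a large sample, both criteria recover the simulated two-factor solution; the study's contribution lies in mapping where such recovery degrades, e.g. PA with tetrachoric correlations in small samples.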

Primary Language: English
Subjects: Education and Educational Research
Journal Section: Research Articles
Authors

Orcid: 0000-0003-3129-1763
Author: Abdullah Faruk Kılıç (Primary Author)
Institution: Hacettepe University
Country: Turkey


Orcid: 0000-0002-6767-0362
Author: İbrahim Uysal
Institution: Bolu Abant İzzet Baysal University
Country: Turkey


Dates

Publication Date: July 31, 2019

APA: Kılıç, A. F., & Uysal, İ. (2019). Comparison of factor retention methods on binary data: A simulation study. Turkish Journal of Education, 8(3), 160-179. DOI: 10.19128/turje.518636