Research Article

The Effect of Sample Size on Full MIRT Equating under Random Group Design

Year 2024, Issue: 71, 53 - 82, 31.07.2024
https://doi.org/10.21764/maeuefd.1361350

Abstract

The purpose of this study is to determine the effect of different sample size levels on the Full MIRT observed score equating method under the random groups design, conditional on the number of items in each dimension. The study used simulated data sets generated under the random groups design. To cover a wide range of sample sizes, the sample size (N) was incremented by 500 from 500 to 8,000. The standard error of equating (SEE), bias (BIAS), and root mean square error (RMSE) of the equating were examined for 16 sample size levels and 3 different numbers of items per dimension. According to the results, SEE and RMSE values increase toward the extremes of the raw-score scale. BIAS values are negative for scores below the mean raw score and positive for scores above it. As the sample size increases, SEE, BIAS, and RMSE values decrease; at sample sizes of 4,000 or above, the error values show no appreciable change. The study concluded that a sample size of 4,000 is sufficient for the Full MIRT observed score equating method under the random groups design across the studied numbers of items per dimension.
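For readers who want to see how the three accuracy indices are typically obtained, the sketch below computes SEE, BIAS, and RMSE per raw-score point over replications, following the standard definitions used in equating simulation studies (Kolen & Brennan, 2014). It is a minimal illustration written in R (the environment cited by the authors), not the study's actual code: the number of replications, the score scale, the criterion relationship, and the normally distributed placeholder for the equated scores are all assumptions; in the study itself, each replication would be a Full MIRT observed score equating of one simulated random-groups data set.

    # Minimal sketch (not the authors' code): SEE, BIAS, and RMSE per
    # raw-score point, computed over replications of an equating.
    set.seed(1)

    sample_sizes <- seq(500, 8000, by = 500)  # the study's 16 N levels

    n_reps    <- 100            # assumed replications per condition
    scores    <- 0:40           # assumed raw-score scale
    criterion <- scores + 0.3   # assumed "true" equating relationship

    # Placeholder for replicated equated scores (one row per replication);
    # in the study, each row would come from a Full MIRT observed score
    # equating of one simulated random-groups data set.
    equated <- matrix(rnorm(n_reps * length(scores),
                            mean = rep(criterion, each = n_reps),
                            sd   = 0.5),
                      nrow = n_reps)

    bias <- colMeans(equated) - criterion  # signed error at each score point
    see  <- apply(equated, 2, sd)          # SD of estimates over replications
    rmse <- sqrt(bias^2 + see^2)           # RMSE combines both components

    head(round(data.frame(score = scores, BIAS = bias, SEE = see, RMSE = rmse), 3))

With these definitions, RMSE squared equals BIAS squared plus SEE squared, which is why RMSE tracks SEE closely wherever BIAS is near zero.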

References

  • Albano, A. D. (2016). equate: An R package for observed-score linking and equating. Journal of Statistical Software, 74(8), 1-36. https://doi.org/10.18637/jss.v074.i08
  • Asiret, S., & Sünbül, S. Ö. (2016). Investigating test equating methods in small samples through various factors. Educational Sciences: Theory and Practice, 16(2), 647-668.
  • Atar, B., & Yeşiltaş, G. (2017). Çok boyutlu eşitleme yöntemlerinin eşdeğer olmayan gruplarda ortak madde deseni için performanslarının incelenmesi. Eğitimde ve Psikolojide Ölçme ve Değerlendirme Dergisi, 8(4), 421-434.
  • Baldwin, P. (2006, April). A modified IRT model intended to improve parameter estimates under small sample conditions. Presented at the National Council on Measurement in Education, San Francisco, USA.
  • Barnes, L. L. B., & Wise, S. L. (1991). The utility of a modified one-parameter IRT model with small samples. Applied Measurement in Education, 4(2), 143–157.
  • Bolt, D. M. (1999). Evaluating the effects of multidimensionality on IRT true-score equating. Applied Measurement in Education, 12(4), 383-407.
  • Brossman, B. G. (2010). Observed score and true score equating procedures for multidimensional item response theory [Doctoral dissertation, University of Iowa].
  • Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling: A Multidisciplinary Journal, 14, 464–504. https://doi.org/10.1080/10705510701301834
  • Cheung, G. W., & Rensvold, R. B. (2002). Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling, 9, 233–255.
  • Choi, J. (2019). Comparison of MIRT observed score equating methods under the common-item nonequivalent groups design [Doctoral dissertation, University of Iowa].
  • Cui, Z., & Kolen, M. J. (2009). Evaluation of two new smoothing methods in equating: The cubic b-spline presmoothing method and the direct presmoothing method. Journal of Educational Measurement, 46(2), 135–158.
  • Çokluk, Ö., Uçar, A., & Balta, E. (2022). Madde tepki kuramına dayalı gerçek puan eşitlemede ölçek dönüştürme yöntemlerinin incelenmesi. Ankara Üniversitesi Eğitim Bilimleri Fakültesi Dergisi. https://doi.org/10.30964/auebfd.1001128
  • Finch, H. (2006). Comparison of the performance of varimax and promax rotations: Factor structure recovery for dichotomous items. Journal of Educational Measurement, 43, 39-52.
  • Gök, B., & Kelecioğlu, H. (2014). Comparison of IRT equating methods using the common-item nonequivalent groups design. Mersin Üniversitesi Eğitim Fakültesi Dergisi, 10(1), 120-136.
  • Hambleton, R. K., & Cook, L. L. (1983). Robustness of item response models and effects of test length and sample size on the precision of ability estimates. In D. J. Weiss (Ed.), New horizons in testing (pp. 31–49). New York: Academic.
  • Harwell, M. R., & Janosky, J. E. (1991). An empirical study of the effects of small datasets and varying prior variances on item parameter estimation in BILOG. Applied Psychological Measurement, 15(3), 279–291.
  • Karagül, A. E. (2020). Küçük örneklemlerde çok kategorili puanlanan maddelerden oluşan testlerde klasik test eşitleme yöntemlerinin karşılaştırılması [Master's thesis, Ankara Üniversitesi]. Yöktez.
  • Kilmen, S. (2010). Madde tepki kuramı’na dayalı test eşitleme yöntemlerinden kestirilen eşitleme hatalarının örneklem büyüklüğü ve yetenek dağılımına göre karşılaştırılması [Doctoral dissertation, Ankara Üniversitesi]. Yöktez.
  • Kilmen, S., & Demirtaşlı, N. (2012). Comparison of test equating methods based on item response theory according to the sample size and ability distribution. Procedia - Social and Behavioral Sciences, 46, 130-134. https://doi.org/10.1016/j.sbspro.2012.05.081
  • Kim, K. Y. (2022). Item response theory true score equating for the bifactor model under the common-item nonequivalent groups design. Applied Psychological Measurement, 46(6), 479-493.
  • Kim, S. Y., Lee, W., & Kolen, M. J. (2019). Simple-structure multidimensional item response theory equating for multidimensional tests. Educational and Psychological Measurement, 80(1), 91-125.
  • Kolen, M. J., & Brennan, R. L. (2014). Test equating, scaling, and linking: Methods and practices (3rd ed.). New York, NY: Springer.
  • Kumlu, G. (2019). Test ve alt testlerde eşitlemenin farklı koşullar açısından incelenmesi (Doctoral dissertation). Hacettepe Üniversitesi, Ankara.
  • Lee, E. (2013). Equating multidimensional tests under a random groups design: A comparison of various equating procedures (Doctoral dissertation). University of Iowa, USA.
  • Lee, E., Lee, W. C., & Brennan, R. L. (2014). Equating multidimensional tests under a random groups design: A comparison of various equating procedures (CASMA Research Report No. 40). Center for Advanced Studies in Measurement and Assessment, The University of Iowa.
  • Lee, G., Lee, W., Kolen, M. J., Park, I. Y., Kim, D. I., & Yang, J. S. (2015). Bi-factor MIRT true-score equating for testlet-based tests. Journal of Educational Evaluation, 28, 681-700.
  • Lee, G., & Lee, W. (2016). Bi-factor MIRT observed-score equating for mixed-format tests. Applied Measurement in Education, 29, 224-241.
  • Lee, W., & Brossman, B. G. (2012). Observed score equating for mixed-format tests using a simple-structure multidimensional IRT framework. In M. J. Kolen & W. Lee (Eds.), Mixed-format tests: Psychometric properties with a primary focus on equating (Vol. 2.2, pp. 115-142). Center for Advanced Studies in Measurement and Assessment.
  • Li, Y. H., & Lissitz, R. W. (2000). An evaluation of the accuracy of multidimensional IRT linking. Applied Psychological Measurement, 24(2), 115-138.
  • Linacre, J. M. (1994). Sample size and item calibration [or person measure] stability. Rasch Measurement Transactions, 7(4), 328.
  • Liu, C., & Kolen, M. J. (2011). Automated selection of smoothing parameters in equipercentile equating. In M. J. Kolen & W. Lee (Eds.), Mixed-format tests: Psychometric properties with a primary focus on equating (Volume 1). (CASMA Monograph Number 2.1) (pp. 237–261). Iowa City, IA: CASMA, The University of Iowa.
  • Livingston, S. A. (1993). Small-sample equating with log-linear smoothing. Journal of Educational Measurement, 30(1), 23–39.
  • Livingston, S. A., & Kim, S. (2010). Random-groups equating with samples of 50 to 400 test takers. Journal of Educational Measurement, 47(2), 175-185.
  • Lord, F. M., & Wingersky, M. S. (1984). Comparison of IRT true-score and equipercentile observed-score “equatings.” Applied Psychological Measurement, 8, 452-461.
  • McDonald, R. P. (2000). A basis for multidimensional item response theory. Applied Psychological Measurement, 24, 99-114.
  • Morris, T. P., White, I. R., & Crowther, M. J. (2019). Using simulation studies to evaluate statistical methods. Statistics in Medicine, 38(11), 2074-2102. https://doi.org/10.1002/sim.8086
  • Pak, S., & Lee, W. C. (2014). An investigation of performance of equating for mixed-format tests using only multiple-choice common items. In M. J. Kolen & W. Lee (Eds.), Mixed-format tests: Psychometric properties with a primary focus on equating (Volume 3). (CASMA Monograph Number 2.3) (pp. 7–23). Iowa City, IA: CASMA, The University of Iowa.
  • Panidvadtana, P., Sujiva, S., & Srisuttiyakorn, S. (2021). A comparison of the accuracy of multidimensional IRT equating methods for mixed-format tests. Kasetsart Journal of Social Sciences, 42, 215-220.
  • Parshall, C. G., Du Bose Houghton, P., & Kromrey, J. D. (1995). Equating error and statistical bias in small sample linear equating. Journal of Educational Measurement, 32, 37–54.
  • Parshall, C. G., Kromrey, J. D., Chason, W., & Yi, Q. (1997, June). Evaluation of parameter estimation under IRT models and small samples. Paper presented at the Psychometric Society, Gatlinburg, USA.
  • Pekmezci, F. B. (2018). İki faktör modelde (bifactor) diklik varsayımının farklı koşullar altında sınanması (Doctoral dissertation). Ankara Üniversitesi, Ankara.
  • Peterson, J. L. (2014). Multidimensional item response theory observed score equating methods for mixed-format tests [Doctoral dissertation, University of Iowa].
  • Puhan, G. (2011). Futility of log-linear smoothing when equating with unrepresentative small samples. Journal of Educational Measurement, 48(3), 274-292.
  • R Core Team (2022). R: A language and environment for statistical computing [Computer software]. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org/
  • Ree, M. J., & Jensen, H. E. (1983). Effects of sample size on linear equating of item characteristic curve parameters. In D. J. Weiss (Ed.), New horizons in testing (pp. 135–146). Elsevier.
  • Sass, D. A., & Schmitt, T. A. (2010). A comparative investigation of rotation criteria within exploratory factor analysis. Multivariate Behavioral Research, 45, 73-103.
  • Skaggs, G. (2005). Accuracy of random groups equating with very small samples. Journal of Educational Measurement, 42(4), 309-330.
  • Skaggs, G., & Lissitz, R. W. (1986). IRT test equating: Relevant issues and a review of recent research. Review of Educational Research, 56(4), 495-529.
  • Swaminathan, H., & Gifford, J. A. (1983). Estimation of parameters in the three-parameter latent trait model. In D. J. Weiss (Ed.), New horizons in testing (pp. 13–30). New York: Academic.
  • Swygert, K. A., McLeod, L. D., & Thissen, D. (2001). Factor analysis for items or testlets scored in more than two categories. In D. Thissen & H. Wainer (Eds.), Test scoring (pp. 217–250). Mahwah, NJ: Erlbaum.
  • Tao, W., & Cao, Y. (2016). An extension of IRT-based equating to the dichotomous testlet response theory model. Applied Measurement in Education, 29, 108-121.
  • Tate, R. (2003). A comparison of selected empirical methods for assessing the structure of responses to test items. Applied Psychological Measurement, 27, 159–203.
  • Tsai, T. H. (1997, March). Estimating minimum sample sizes in random groups equating. Presented at the National Council on Measurement in Education Meeting, Chicago, USA.
  • Wang, S., & Liu, H. (2018). Minimum sample size needed for equipercentile equating under the random groups design. In M. J. Kolen & W. Lee (Eds.), Mixed-format tests: Psychometric properties with a primary focus on equating (Vol. 2.5, pp. 107-126). Center for Advanced Studies in Measurement and Assessment.
  • Wang, T. (2006). Standard errors of equating for equipercentile equating with log-linear pre-smoothing using the delta method (CASMA Research Report No. 14). Center for Advanced Studies in Measurement and Assessment, The University of Iowa.
  • Zhang, O. (2012). Observed score and true score equating for multidimensional item response theory under the nonequivalent group anchor test design [Doctoral dissertation, University of Florida].
  • Zor, Y. M. (2023). Investigation of multidimensional scale transformation methods applied to multidimensional test according to various conditions. Adiyaman University Journal of Educational Sciences, 13(1), 41-53.

Details

Primary Language Turkish
Subjects Simulation Study
Journal Section Articles
Authors

Burcu Demiröz

Nuri Doğan

Publication Date July 31, 2024
Submission Date September 15, 2023
Published in Issue Year 2024 Issue: 71

Cite

APA Demiröz, B., & Doğan, N. (2024). RANDOM GRUP DESENİ ALTINDA TAM MIRT EŞİTLEMEDE ÖRNEKLEM BÜYÜKLÜĞÜNÜN ETKİSİ. Mehmet Akif Ersoy Üniversitesi Eğitim Fakültesi Dergisi(71), 53-82. https://doi.org/10.21764/maeuefd.1361350