Research Article

Determining Ability Levels with Computerized Adaptive Testing and a Paper-and-Pencil Test in Distance Education: A Multi-Centre Real-Data Application

Year 2022, Volume: 21, Issue: 63, 95-103, 30.04.2022
https://doi.org/10.25282/ted.1003962

Abstract

Aim: With distance education becoming widespread during the ongoing Covid-19 pandemic, computerized adaptive testing is gaining importance as a means of determining examinees' ability levels more accurately and of analyzing testing strategies in online examinations. This study aimed to compare estimates of students' ability levels obtained with computerized adaptive testing and with a paper-and-pencil test during distance education.
Methods: This cross-sectional, methodological study was conducted between March 2020 and December 2020. The study population consisted of fourth-year physical education and sports teaching students at the Faculties of Sport Sciences of Erzincan Binali Yıldırım University, Gazi University, Erciyes University, and Kastamonu University. Analyses were performed with the R programming language (ver. 3.6.2) and RStudio (ver. 1.2.5033). The number of participants was 176; the item bank contained 80 items (20 questions in each of four fields, exercise physiology, psychomotor development, training science, and exercise and nutrition, prepared by field experts); the stopping criterion for the computerized adaptive test was a standard error of θ < 0.5. Agreement between the paper-and-pencil test and computerized adaptive test θ values was evaluated with the intraclass correlation coefficient. Computerized adaptive test estimates for each examinee were obtained with the “randomCAT” function in the catR package. Each examinee's estimated ability level, the standard error of θ, and the number of items administered were reported as the output of the analysis. The maximum Fisher information criterion was used for item selection, and maximum likelihood estimation was used for the iterative ability level estimation.
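As an illustration of the adaptive procedure described above, a minimal R sketch using the catR package is given below. It is not the authors' original script: the “randomCAT” call mirrors the reported settings (maximum Fisher information item selection, maximum likelihood estimation, stopping when the standard error of θ falls below 0.5), while the 80-item bank is simulated with genDichoMatrix and the true ability value is an arbitrary placeholder, because the actual item parameters and examinee abilities are not reported in this abstract.

    # Minimal sketch (not the authors' original code): one simulated CAT run
    # with catR, mirroring the settings reported in the Methods.
    library(catR)

    set.seed(2020)
    bank <- genDichoMatrix(items = 80, model = "2PL")        # hypothetical 2PL item bank

    res <- randomCAT(
      trueTheta = 0,                                         # placeholder true ability
      itemBank  = bank,
      start     = list(nrItems = 1, theta = 0),              # first item targeted at theta = 0
      test      = list(method = "ML", itemSelect = "MFI"),   # ML estimation, max Fisher information
      stop      = list(rule = "precision", thr = 0.5),       # stop when SE(theta) < 0.5
      final     = list(method = "ML")
    )

    res$thFinal             # estimated ability level (theta)
    res$seFinal             # standard error of the final theta estimate
    length(res$testItems)   # number of items administered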
Results: Moderate agreement was found between the classical test score (on a 0-100 scale) and the computerized adaptive test (intraclass correlation coefficient: 0.682; 95% CI: 0.594-0.765; F(175,175) = 17.1; p < 0.001). When the ability levels estimated under item response theory from the paper-and-pencil responses were compared with the computerized adaptive test estimates, agreement was excellent (intraclass correlation coefficient: 0.90; 95% CI: 0.866-0.924). In addition, the computerized adaptive test reduced the number of items administered by 80% compared with the paper-and-pencil test.
Conclusion: The results of this study indicate that computerized adaptive testing can produce reliable results in a shorter time and with fewer items, and that results become available shortly after the test is completed.


Determining Ability Levels by Using Computerized Adaptive Testing and Paper-And-Pencil Testing in the Distance Education: A Multi-Centred Real Data Study

Year 2022, Volume: 21, Issue: 63, 95-103, 30.04.2022
https://doi.org/10.25282/ted.1003962

Abstract

Aim: With the rise of distance education during the Covid-19 pandemic, computerized adaptive testing is becoming increasingly important for determining individuals' ability levels more accurately and for analyzing testing strategies in online exams. This study aimed to compare estimates of students' ability levels in distance education obtained with computerized adaptive testing and with a paper-and-pencil test.
Methods: This cross-sectional, methodological study was conducted between March 2020 and December 2020. The study population consisted of fourth-year students of physical education and sports teaching at the Faculties of Sport Sciences of Erzincan Binali Yıldırım University, Gazi University, Erciyes University, and Kastamonu University. Analyses were performed with the R programming language (ver. 3.6.2) and RStudio (ver. 1.2.5033) software. The number of participants was 176; the item bank contained 80 items (20 questions in each of four fields, exercise physiology, psychomotor development, training science, and exercise and nutrition, prepared by field experts); the stopping criterion for the computerized adaptive test was a standard error of θ < 0.5. Agreement between the paper-and-pencil test and computerized adaptive test θ values was evaluated with the intraclass correlation coefficient. Computerized adaptive test (CAT) estimates for each examinee were obtained with the “randomCAT” function in the catR package. Each examinee's estimated ability level, the standard error of θ, and the number of items administered were reported as the output of the analysis. The maximum Fisher information criterion was used for item selection, and maximum likelihood estimation was used for the iterative ability level estimation.
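As a sketch of the agreement analysis described above (not the authors' exact code), the intraclass correlation coefficient between the two sets of θ estimates can be computed with the icc function of the irr package. The vectors theta_ppt and theta_cat below are hypothetical placeholders for the per-student estimates, and the two-way, single-measure, absolute-agreement ICC is shown as one common specification, since the abstract does not state which ICC model was used.

    # Hypothetical sketch of the agreement analysis between the
    # paper-and-pencil and CAT theta estimates (placeholder data only).
    library(irr)

    set.seed(1)
    theta_ppt <- rnorm(176)                        # placeholder paper-and-pencil thetas
    theta_cat <- theta_ppt + rnorm(176, sd = 0.3)  # placeholder CAT thetas

    icc(cbind(theta_ppt, theta_cat),
        model = "twoway", type = "agreement", unit = "single")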
Results: Moderate agreement was found between the classical test score (on a 0-100 scale) and the computerized adaptive test (intraclass correlation coefficient: 0.682; 95% CI: 0.594-0.765; F(175,175) = 17.1; p < 0.001). When the ability levels estimated under item response theory from the paper-and-pencil responses were compared with the computerized adaptive test estimates, agreement was excellent (intraclass correlation coefficient: 0.90; 95% CI: 0.866-0.924). It was also found that the computerized adaptive test reduced the number of items administered by 80% compared with the paper-and-pencil test.
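The reported 80% reduction corresponds to the average CAT test length relative to the fixed 80-item paper-and-pencil form. A minimal illustration of that calculation, using a hypothetical vector of per-student CAT test lengths, is:

    # Hypothetical per-student CAT test lengths: an average of about 16
    # administered items corresponds to an ~80% reduction from the 80-item form.
    n_items_cat <- rep(16, 176)                 # placeholder test lengths
    reduction   <- 1 - mean(n_items_cat) / 80   # proportion of items saved
    round(100 * reduction)                      # ~80 (%)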
Conclusions: The results of this study show that computerized adaptive testing can produce reliable results in a shorter time and with fewer items, and that results become available shortly after the test is completed.


Details

Primary Language English
Subjects Health Institutions Management
Section Original Research
Authors

Yusuf Kemal Arslan 0000-0003-1308-8569

Mergül Çolak 0000-0002-4762-8298

Ulviye Bilgin 0000-0001-5871-0089

Publication Date 30 April 2022
Submission Date 3 October 2021
Published Issue Year 2022, Volume: 21, Issue: 63

How to Cite

Vancouver Arslan YK, Çolak M, Bilgin U. Determining Ability Levels by Using Computerized Adaptive Testing and Paper-And-Pencil Testing in the Distance Education: A Multi-Centred Real Data Study. Tıp Eğitimi Dünyası. 2022;21(63):95-103.