
Primary School Students’ Attitudes Towards Computer Based Testing And Assessment In Turkey

Year 2012, Volume: 13 Issue: 3, 177 - 188, 01.09.2012

Abstract

This study investigated the attitudes of primary school students towards computer based testing and assessment in terms of different variables. The sample consisted of primary school students who took part in a computer based testing and assessment application via CITO-ÖİS. Data were collected with the “Scale on Attitudes towards Computer Based Testing and Assessment”, and the results were compared by school type, gender, and grade level. The results revealed significant differences between the attitudes of students from different schools. No such differences were found when gender, grade level, or length of participation in computer based assessment was taken into account.
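As a minimal sketch of the kind of group comparison the abstract describes, the snippet below runs a one-way ANOVA on attitude-scale scores grouped by school type. The choice of analysis and all scores are assumptions for illustration (the reference list cites Glass et al., 1972, on ANOVA assumptions); the paper’s actual data and analysis are not reproduced here.

    # A minimal sketch, not the paper's analysis: hypothetical attitude-scale
    # totals for three school types, compared with a one-way ANOVA.
    from scipy import stats

    school_a = [72, 68, 75, 80, 66, 71]  # hypothetical scores, school type A
    school_b = [61, 58, 64, 60, 57, 63]  # hypothetical scores, school type B
    school_c = [70, 74, 69, 73, 68, 72]  # hypothetical scores, school type C

    # f_oneway tests whether the group means differ more than chance allows.
    f_stat, p_value = stats.f_oneway(school_a, school_b, school_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < .05 suggests a school-type effect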

References

  • Berberoğlu, G. (2011). Cito Türkiye Öğrenci İzleme Sistemi [Turkey Student Monitoring System]. Retrieved 03.03.2011, from http://www.cito.com.tr
  • Bennett, R. E., Braswell, J., Oranje, A., Sandene, B., Kaplan, B., & Yan, F. (2008). Does it matter if I take my mathematics test on computer? A second empirical study of mode effects in NAEP. Journal of Technology, Learning, and Assessment, 6(9).
  • Bernard, E. W., Jr. (1997). Gender differences in computer-related attitudes and behavior: A meta-analysis. Computers in Human Behavior, 13(1), 1-22.
  • Bugbee, A. C. (1996). The equivalence of paper-and-pencil and computer-based testing. Journal of Research on Computing in Education, 28(3), 282-299.
  • Cheung, C. M. K., Lee, M. K. O., & Chen, Z. (2002). Using the Internet as a learning medium: An exploration of gender difference in the adoption of FaBWeb. In Proceedings of the 35th Hawaii International Conference on System Sciences (Hawaii, 7–10 January 2002).
  • Clariana, R., & Wallace, P. (2002). Paper-based versus computer-based assessment: Key factors associated with the test mode effect. British Journal of Educational Technology, 33(5), 593-602.
  • Drasgow, F., & Olsen-Buchanan, J. B. (1999). Innovations in computerized assessment. Mahwah, NJ: Erlbaum.
  • Gallagher, A., Bridgeman, B., & Cahalan, C. (2000). The effect of computer-based tests on racial/ethnic, gender, and language groups (GRE Board Professional Report No. 96-21P). Princeton, NJ: Educational Testing Service.
  • Glass, G. V., Peckham, P. D., & Sanders, J. R. (1972). Consequences of failure to meet assumptions underlying the fixed-effects analysis of variance and covariance. Review of Educational Research, 42, 237-288.
  • Goldberg, A., & Pedulla, J. J. (2002). Performance differences according to test mode and computer familiarity on a practice GRE. Educational and Psychological Measurement, 62(6), 1053-1067.
  • Gvozdenko, E., & Chambers, D. (2007). Beyond test accuracy: Benefits of measuring response time in computerised testing. Australasian Journal of Educational Technology, 23(4), 542-558.
  • Joosten-ten Brinke, D., van Bruggen, J., Hermans, H., Burgers, J., Giesbers, B., Koper, R., et al. (2007). Modeling assessment for re-use of traditional and new types of assessment. Computers in Human Behavior, 23(6), 2721-2741.
  • Kaklauskas, A., Zavadskas, E. K., Pruskus, V., Vlasenko, A., Seniut, M., Kaklauskas, G., et al. (2010). Biometric and intelligent self-assessment of student progress system. Computers & Education, 55(2), 821-833.
  • Kesici, S., Sahin, I., & Akturk, A. O. (2009). Analysis of cognitive learning strategies and computer attitudes, according to college students’ gender and locus of control. Computers in Human Behavior, 25, 529-534.
  • Kim, D. H., & Huynh, H. (2007). Comparability of computer and paper-and-pencil versions of Algebra and Biology assessments. Journal of Technology, Learning, and Assessment, 6(4).
  • Kingston, N. M. (2009). Comparability of computer- and paper-administered multiple-choice tests for K-12 populations: A synthesis. Applied Measurement in Education, 22(1), 22-37.
  • Leeson, H. V. (2006). The mode effect: A literature review of human and technological issues in computerized testing. International Journal of Testing, 6(1), 24.
  • Lunz, M. E., & Bergstrom, B. A. (1994). An empirical study of computerized adaptive test administration conditions. Journal of Educational Measurement, 31(3), 251-263.
  • McKee, L. M., & Levinson, E. M. (1990). A review of the computerized version of the Self-Directed Search. Career Development Quarterly, 38(4), 325-333.
  • Mead, A. D., & Drasgow, F. (1993). Equivalence of computerized and paper-and-pencil cognitive ability tests: A meta-analysis. Psychological Bulletin, 114(3), 449-458.
  • Mazzeo, J., & Harvey, A. L. (1988). The equivalence of scores from automated and conventional educational and psychological tests: A review of the literature (College Board Report No. 88-8). New York: College Entrance Examination Board.
  • Neuman, G., & Baydoun, R. (1998). Computerization of paper-and-pencil tests: When are they equivalent? Applied Psychological Measurement, 22, 71-83.
  • Parshall, C. G., Spray, J. A., Kalohn, J. C., & Davey, T. (2002). Practical considerations in computer-based testing. New York: Springer.
  • Pomplun, M., Frey, S., & Becker, D. F. (2002). The score equivalence of paper and computerized versions of a speeded test of reading comprehension. Educational and Psychological Measurement, 62(2), 337-354.
  • Pomplun, M., & Custer, M. (2005). The score comparability of computerized and paper-and-pencil formats for K-3 reading tests. Journal of Educational Computing Research, 32(2), 153-166.
  • Pomplun, M., Ritchie, T., & Custer, M. (2006). Factors in paper-and-pencil and computer reading score differences at the primary grades. Educational Assessment, (2), 127-143.
  • Russo, A. (2002). Mixing technology and testing [Computer-based testing]. The School Administrator. Retrieved 05.2012, from http://www.aasa.org/SchoolAdministratorArticle.aspx?id=10354
  • Schulenberg, S. E., & Yutrzenka, B. A. (1999). The equivalence of computerized and paper-and-pencil psychological instruments: Implications for measures of negative affect. Behavior Research Methods, Instruments, & Computers, 31(2), 315-321.
  • Shermis, M. D., & Lombard, D. (1998). Effects of computer-based test administrations on test anxiety and performance. Computers in Human Behavior, (1), 111-123.
  • Smith, B., & Caputi, P. (2007). Cognitive interference model of computer anxiety: Implications for computer-based assessment. Computers in Human Behavior, 23, –1498.
  • Thelwall, M. (2000). Computer-based assessment: A versatile educational tool. Computers & Education, 34(1), 37-49.
  • Trotter, A. (2001). Testing firms see future market in online assessment. Education Week on the Web, 20(4), 6.
  • Tseng, H., Macleod, H. A., & Wright, P. (1998). Computer anxiety and measurement of mood change. Computers in Human Behavior, 13(3), 305-316.
  • Vispoel, W. P. (2000). Reviewing and changing answers on computerized fixed-item vocabulary tests. Educational and Psychological Measurement, 60, 371-384.
  • Wang, S., Jiao, H., Young, M. J., Brooks, T. E., & Olson, J. (2007). A meta-analysis of testing mode effects in Grade K-12 mathematics tests. Educational and Psychological Measurement, 67, 219-238.
  • Wang, S., Jiao, H., Young, M. J., Brooks, T. E., & Olson, J. (2008). Comparability of computer-based and paper-and-pencil testing in K-12 assessment: A meta-analysis of testing mode effects. Educational and Psychological Measurement, 68, 5-24.
  • Yuen, H. K., & Ma, W. K. (2002). Gender differences in teacher computer acceptance. Journal of Technology and Teacher Education, 10(3), 365-382.
  • Yurdabakan, İ. (2008). Eğitimde kullanılan ölçme araçlarının nitelikleri [Qualities of the measurement tools used in education]. In S. Erkan & M. Gömleksiz (Eds.), Eğitimde ölçme ve değerlendirme [Measurement and evaluation in education] (pp. 66). Ankara, Turkey: Nobel Yayın Dağıtım [Nobel Publisher].

There are 42 citations in total.

Assist. Prof. Dr. İrfan YURDABAKAN, DEU Buca Education Faculty


Details

Primary Language English
Journal Section Articles
Authors

İrfan Yurdabakan

Cicek Uzunkavak

Publication Date September 1, 2012
Submission Date February 27, 2015
Published in Issue Year 2012 Volume: 13 Issue: 3

Cite

APA Yurdabakan, İ., & Uzunkavak, C. (2012). Primary School Students’ Attitudes Towards Computer Based Testing And Assessment In Turkey. Turkish Online Journal of Distance Education, 13(3), 177-188.