Research Article

Developing and validating a computerized oral proficiency test of English as a foreign language (COPTEFL)

Year 2021, Volume: 8, Issue: 1, 38-66, 15.03.2021
https://doi.org/10.21449/ijate.854678

Abstract

This study reports the development and validation of the Computerized Oral Proficiency Test of English as a Foreign Language (COPTEFL), which is designed to assess the speaking proficiency levels of students at Anadolu University School of Foreign Languages (AUSFL). For this purpose, three monologic tasks were developed based on the Global Scale of English (GSE, 2015) level descriptors. The COPTEFL delivery system was then developed, and test scores and test-takers’ perspectives on the monologic tasks were compared between the COPTEFL and a face-to-face speaking test. The findings from these quantitative and qualitative analyses provided substantial support for the validity and reliability of the COPTEFL and informed the further refinement of the test tasks.

References

  • Alderson, J. C., Clapham, C. & Wall, D. (1995). Language test construction and evaluation. Cambridge: Cambridge University Press.
  • Aydın, B., Akay, E., Polat, M. & Geridönmez, S. (2016). Türkiye’deki hazırlık okullarının yeterlik sınavı uygulamaları ve bilgisayarlı dil ölçme fikrine yaklaşımları. Anadolu Üniversitesi Sosyal Bilimler Dergisi, 16(2), 1-20.
  • Aydın, B., Geridönmez, S., Polat, M. & Akay, E. (2017). Feasibility of computer assisted English proficiency tests in Turkey: A field study. Anadolu Üniversitesi Eğitim Bilimleri Enstitüsü Dergisi, 7(1), 107-122.
  • Bachman, L. F. & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests. Oxford, UK: Oxford University Press.
  • Bachman, L. F., Lynch, B. & Mason, M. (1995). Investigating variability in tasks and rater judgements in a performance test of foreign language speaking. Language Testing, 12(2), 238-257.
  • Carr, N. T. (2011). Designing and analyzing language tests: Oxford handbooks for language teachers. Oxford, UK: Oxford University Press.
  • Chapelle, C. A. (2013). Conceptions of validity. In The Routledge handbook of language testing (pp. 35-47). Routledge.
  • Clark, J. L. D. (1979). Direct vs. semi-direct tests of speaking ability. In E. J. Briere and F. B. Hinofotis (Eds.), Concepts in language testing: Some recent studies (pp. 35-49). Washington, DC: TESOL.
  • Council of Europe. (2001). Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge: Cambridge University Press.
  • Creswell, J. W. (2012). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Boston: Pearson Education.
  • East, M. (2016). Mediating Assessment Innovation: Why Stakeholder Perspectives Matter. In Assessing Foreign Language Students’ Spoken Proficiency (pp. 1-24). Springer, Singapore.
  • Field, J. (2013). Cognitive validity. In A. Geranpayeh and L. Taylor (Eds.), Examining listening: Research and practice in assessing second language listening, Studies in language testing, 35 (pp. 77-151). Cambridge: Cambridge University Press.
  • Fulcher, G. (2003). Interface design in computer-based language testing. Language Testing, 20(4), 384-408.
  • Fulcher, G. (2014). Testing second language speaking. London: Routledge.
  • Fulcher, G. & Davidson, F. (2007). Language testing and assessment. New York: Routledge.
  • Galaczi, E. D. (2010). Face-to-face and computer-based assessment of speaking: Challenges and opportunities. In L. Araújo (Ed.), Computer-based assessment of foreign language speaking skills (pp. 29-51). Luxembourg: European Union.
  • GSE. (2015). New Global Scale of English Learning Objectives. Pearson English, New York. Retrieved May 18, 2017 from http://www.english.com/gse#
  • Gülle, T. (2015). Development of a speaking test for second language learners of Turkish. Unpublished Master’s Thesis. Boğaziçi University.
  • Jeong, T. (2003). Assessing and interpreting students’ English oral proficiency using d-VOCI in an EFL context. Unpublished doctoral dissertation. Ohio State University, Columbus.
  • Joo, M. (2008). Korean university students' attitudes to and performance on a Face-To-Face Interview (FTFI) and a Computer Administered Oral Test (CAOT). Doctoral thesis, University of London.
  • Kenyon, D. M. & Malabonga, V. (2001). Comparing examinee attitudes toward computer-assisted and other oral proficiency assessments. Language Learning & Technology, 5(2), 60-83.
  • Khalifa, H. & Weir, C. J. (2009). Examining reading (Vol. 29). Ernst Klett Sprachen.
  • Kiddle, T. & Kormos, J. (2011). The effect of mode of response on a semidirect test of oral proficiency. Language Assessment Quarterly, 8(4), 342-360.
  • Larson-Hall, J. (2010). A guide to doing statistics in second language research. New York: Routledge.
  • Luther, A. C. (1992). Designing interactive multimedia. New York: Multi-science Press Inc.
  • Malabonga, V., Kenyon, D. M. & Carpenter, H. (2005). Self-assessment, preparation and response time on a computerized oral proficiency test. Language Testing, 22(1), 59-92.
  • Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (pp. 13-103). New York: Macmillan/American Council on Education.
  • McNamara, T. F. (1987). Assessing the language proficiency of health professionals. Recommendations for the reform of the Occupational English Test (Report submitted to the Council of Overseas Professional Qualifications). Department of Russian and Language Studies, University of Melbourne, Melbourne, Australia.
  • Mousavi, S. A. (2007). Development and validation of a multimedia computer package for the assessment of oral proficiency of adult ESL learners: Implications for score comparability. Doctoral thesis, Griffith University.
  • Ockey, G. J. (2009). Developments and challenges in the use of computer-based testing for assessing second language ability. The Modern Language Journal, 93(1), 836-847.
  • O’Loughlin, K. (1997). The comparability of direct and semi-direct speaking tests: A case study. Unpublished doctoral thesis. University of Melbourne.
  • O'Sullivan, B. (Ed.). (2011a). Language testing: Theories and practices. Palgrave Macmillan.
  • O’Sullivan, B. (2011b). Language testing. In The Routledge handbook of applied linguistics (pp. 279-293). Routledge.
  • Öztekin, E. (2011). A comparison of computer assisted and face-to-face speaking assessment: Performance, perceptions, anxiety, and computer attitudes. MA thesis, Bilkent University.
  • Qian, D. (2009). Comparing direct and semi-direct modes for speaking assessment: affective effects on test takers. Language Assessment Quarterly, 6(2), 113–125.
  • Shaw, S. D. & Weir, C. J. (2007). Examining writing: Research and practice in assessing second language writing (Vol. 26). Cambridge University Press.
  • Shneiderman, B. (2004). Designing the user interface: Strategies for effective human-computer interaction (4th ed.). Boston: Pearson Addison-Wesley.
  • Shohamy, E., Donitsa-Schmidt, S. & Waizer, R. (1993). The effect of the elicitation mode on the language samples obtained in oral tests. Paper presented at the 15th Language Testing Research Colloquium, Cambridge, UK.
  • Thompson, G. L., Cox, T. L. & Knapp, N. (2016). Comparing the OPI and the OPIc: The effect of test method on oral proficiency scores and student preference. Foreign Language Annals, 49(1), 75-92.
  • Weir, C. J. (2005). Language testing and validation. Hampshire: Palgrave Macmillan.
  • Wu, W. M. & Stansfield, C. W. (2001). Towards authenticity of task in test development. Language Testing, 18(2), 187-206.
  • Zainal Abidin, S. A. (2009). Testing spoken language using computer technology: A comparative validation study of 'live' and computer delivered test versions using Weir's framework. PhD thesis, Universiti Teknologi Mara.
  • Zak, D. (2001). Programming with Microsoft Visual Basic 6.0: Enhanced edition. Boston, Massachusetts: Course Technology Thomson Learning.
  • Zhou, Y. (2008). A comparison of speech samples of monologic tasks in speaking tests between computer-delivered and face-to-face modes. JLTA Journal, 11, 189-208.
  • Zhou, Y. (2009). Effects of computer delivery mode on testing second language speaking: The case of monologic tasks. Unpublished doctoral dissertation, Tokyo University of Foreign Studies, Tokyo.
  • Zhou, Y. (2015). Computer-delivered or face-to-face: Effects of delivery mode on the testing of second language speaking. Language Testing in Asia, 5(1), 2.


Details

Primary Language: English
Subjects: Studies on Education
Section: Articles
Authors

Cemre Isler (ORCID: 0000-0002-3622-0756)

Belgin Aydın (ORCID: 0000-0002-4719-7440)

Publication Date: March 15, 2021
Submission Date: May 19, 2020
Published in Issue: Year 2021, Volume: 8, Issue: 1

Cite

APA Isler, C., & Aydın, B. (2021). Developing and validating a computerized oral proficiency test of English as a foreign language (COPTEFL). International Journal of Assessment Tools in Education, 8(1), 38-66. https://doi.org/10.21449/ijate.854678
