Research Article

Examining Rater Biases of Peer Assessors in Different Assessment Environments

Volume: 8, Issue: 4, 31 October 2021

Abstract

The current study employed many-facet Rasch measurement (MFRM) to examine the rater bias patterns of EFL student teachers (hereafter, students) when rating the teaching performance of their peers in three assessment environments: online, face-to-face, and anonymous. Twenty-four students and two instructors rated 72 micro-teachings performed by senior Turkish students. Performance was assessed using a five-category analytic rubric developed by the researchers (Lesson Presentation, Classroom Management, Communication, Material, and Instructional Feedback). MFRM revealed severity and leniency biases in all three assessment environments at both the group and individual levels, with biases occurring less frequently in anonymous assessment. The central tendency and halo effects were observed only at the individual level in all three assessment environments, and these errors were similar across environments. Semi-structured interviews with peer raters (n = 24) documented their perspectives on how anonymous assessment affected the severity, leniency, central tendency, and halo effects. In addition, the findings showed that hiding the identity of peers improves the reliability and validity of the measurements performed during peer assessment.
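As a rough illustration of the measurement model the abstract refers to, the many-facet Rasch model decomposes the log-odds of adjacent rating categories into an examinee ability term, an item (rubric category) difficulty term, a rater severity term, and a category threshold. The sketch below is a minimal, hypothetical implementation (the function name, parameter values, and thresholds are illustrative assumptions, not values from the study); it shows how a more severe rater shifts probability toward lower score categories for the same examinee.

```python
import math

def mfrm_category_probs(ability, item_diff, rater_sev, thresholds):
    """Category probabilities under a many-facet Rasch rating-scale model:
    log(P_k / P_{k-1}) = ability - item_diff - rater_sev - tau_k,
    where thresholds = [tau_1, ..., tau_m] are the step difficulties.
    Returns probabilities for categories 0..m (summing to 1)."""
    # Cumulative logit for category k is the sum of the adjacent-category
    # logits up to k; category 0 is the reference with cumulative logit 0.
    cum_logits = [0.0]
    running = 0.0
    for tau in thresholds:
        running += ability - item_diff - rater_sev - tau
        cum_logits.append(running)
    # Softmax over cumulative logits (shifted by the max for stability).
    mx = max(cum_logits)
    exps = [math.exp(v - mx) for v in cum_logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative comparison: a severe rater (+1 logit) versus a lenient
# rater (-1 logit) scoring the same examinee on the same rubric category.
probs_severe = mfrm_category_probs(0.5, 0.0, 1.0, [-1.0, 0.0, 1.0])
probs_lenient = mfrm_category_probs(0.5, 0.0, -1.0, [-1.0, 0.0, 1.0])
```

Under this parameterization, the expected score from the severe rater is lower than from the lenient one, which is the kind of systematic rater effect MFRM is used to detect and adjust for.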

Details

Primary Language

English

Subjects

Field Education

Section

Research Article

Publication Date

31 October 2021

Submission Date

7 June 2021

Acceptance Date

22 August 2021

Published Issue

Year 2021, Volume: 8, Issue: 4

Cite

APA
Yeşilçınar, S., & Şata, M. (2021). Examining Rater Biases of Peer Assessors in Different Assessment Environments. International Journal of Psychology and Educational Studies, 8(4), 136-151. https://izlik.org/JA45TW35KZ