Research Article

Is Cross-Marking A Way To Increase Rater Reliability?

Volume: 6, Issue: 3, 30 September 2018

Abstract

Most error correction research has focused on whether teachers should correct errors in student writing, how they should do so, and how extensive the correction should be. Recent research has thus concentrated on the pedagogical merits of error correction and its possible benefits for student learning. However, in contexts where graders score the same paper multiple times, little has been done to investigate whether one grader's corrections influence other graders, or whether seeing a colleague's corrections on the papers they mark affects the reliability of raters' scores positively or negatively. This study explored whether corrections made by graders affect the scores of colleagues who score the same papers a second time, a practice intended to produce more accurate results and to ensure rating reliability. To that end, 12 writing teachers graded 20 essays written by intermediate-level English learners. In the first stage, the participants graded 10 papers without making any error corrections; the same graders re-scored those papers after 3 weeks, and inter-rater and intra-rater reliability statistics were computed for this set to establish the raters' actual reliability levels under normal circumstances. In the second stage, the graders scored the other 10 papers while also marking errors on them; after 3 weeks, each teacher graded the same papers as corrected by a paired grader. The scores assigned to these papers at each stage by the same raters were compared statistically to investigate the effect of error correction on the scores.
In conclusion, the results revealed that error marking and grader comments on writing papers may have a negative effect on intra-rater reliability, whereas they could have a positive effect on inter-rater reliability when a pool of raters grades the same papers.
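The two reliability notions the abstract contrasts can be illustrated with a minimal sketch. The paper does not specify its statistical procedure; the snippet below simply uses Pearson correlation on hypothetical essay scores (all numbers are invented): intra-rater reliability compares one rater's first and second markings of the same essays, while inter-rater reliability compares two different raters on the same markings.

```python
# Hedged sketch, not the paper's actual analysis: Pearson correlation
# as a simple stand-in for intra- and inter-rater reliability.

from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores for 5 essays (invented for illustration).
rater_a_first  = [70, 65, 80, 55, 90]   # rater A, first marking
rater_a_second = [72, 63, 78, 58, 88]   # rater A, same essays 3 weeks later
rater_b_first  = [68, 70, 75, 60, 85]   # rater B, first marking

intra = pearson(rater_a_first, rater_a_second)  # intra-rater: A vs. A
inter = pearson(rater_a_first, rater_b_first)   # inter-rater: A vs. B
```

Under the study's finding, seeing a colleague's error marks would tend to pull `inter` upward (raters converge on shared scores) while pulling `intra` downward (a rater's second score drifts from their own first score).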

Keywords

Details

Primary Language

English

Subjects

Turkish and Social Sciences Education (Other)

Section

Research Article

Authors

Publication Date

30 September 2018

Submission Date

17 August 2018

Acceptance Date

30 September 2018

Published Issue

Year 2018, Volume: 6, Issue: 3

Cite

APA
Polat, M. (2018). Is Cross-Marking A Way To Increase Rater Reliability? International Journal of Languages’ Education and Teaching, 6(3), 331-346. https://izlik.org/JA73DM75YR