Research Article

Gathering evidence on e-rubrics: Perspectives and many facet Rasch analysis of rating behavior

Volume: 8 Number: 2 June 10, 2021
Abstract

This study examined faculty perspectives on the use of electronic rubrics, as well as faculty rating behavior, in a freshman composition course. A mixed-methods approach was employed for data collection and analysis. Data on faculty perspectives were collected from nine instructors through semi-structured interviews; for rating behavior, six instructors teaching the same course in Fall 2019 shared their students’ essay scores with the researchers. The many-facet Rasch model (MFRM) was employed for quantitative data analysis. The quantitative findings showed that the instructors differed in their degree of leniency and severity, with one instructor more lenient and one more severe than the others. Another notable finding was that one instructor turned out to be an inconsistent user of the e-rubric. The qualitative findings showed that writing faculty believe e-rubrics offer considerable advantages, such as facilitating scoring, ensuring standardization, and reducing student complaints and grade appeals. However, they view the impact of e-rubrics on student writing with cautious optimism. The qualitative and quantitative findings overlap, and the participants’ responses shed some light on the rating behavior of the writing faculty.

Supporting Institution

This work was funded by Gulf University for Science and Technology, Kuwait.

Project Number

187226

Details

Primary Language

English

Subjects

Studies on Education

Journal Section

Research Article

Publication Date

June 10, 2021

Submission Date

October 29, 2020

Acceptance Date

April 13, 2021

Published in Issue

Year 2021 Volume: 8 Number: 2

APA
Erguvan, I. D., & Aksu Dünya, B. (2021). Gathering evidence on e-rubrics: Perspectives and many facet Rasch analysis of rating behavior. International Journal of Assessment Tools in Education, 8(2), 454-474. https://doi.org/10.21449/ijate.818151
