Year 2021, Volume 8, Issue 2, Pages 454-474, 2021-06-10

Gathering evidence on e-rubrics: Perspectives and many facet Rasch analysis of rating behavior

Inan Deniz ERGUVAN [1] , Beyza AKSU DÜNYA [2]

This study examined faculty perspectives on the use of electronic rubrics, and faculty rating behavior, in a freshman composition course. A mixed-methods approach was employed for data collection and analysis. Data on faculty perspectives were collected from nine instructors through semi-structured interviews; for rating behavior, six instructors teaching the same course in Fall 2019 shared their students' essay scores with the researchers. The many-facet Rasch model (MFRM) was employed for the quantitative analysis. The quantitative findings showed that the instructors differed in their degree of leniency and severity, with one instructor more lenient and one more severe than the others. Another notable finding was that one instructor turned out to be an inconsistent user of the e-rubric. The qualitative findings showed that writing faculty believe e-rubrics offer considerable advantages, such as facilitating scoring, ensuring standardization, and reducing student complaints and grade appeals. However, they view the impact of e-rubrics on student writing with cautious optimism. The qualitative and quantitative strands overlap, and the responses elicited from the participants shed some light on the rating behavior of the writing faculty.
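For readers unfamiliar with the model mentioned above, a standard three-facet formulation of the MFRM from the Rasch measurement literature is sketched below. This is the conventional form for rater-mediated essay scoring, not necessarily the exact specification the authors estimated; the facets shown (examinee, criterion, rater) are illustrative.

```latex
% Standard many-facet Rasch (rating scale) formulation:
% log-odds of examinee n receiving category k rather than k-1
% on criterion i from rater j.
\[
  \ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right)
  = \theta_n - \beta_i - \alpha_j - \tau_k
\]
% \theta_n : ability of examinee (essay writer) n
% \beta_i  : difficulty of rubric criterion i
% \alpha_j : severity of rater j (higher = more severe)
% \tau_k   : threshold between rating categories k-1 and k
```

Under this parameterization, the rater severity terms \(\alpha_j\) are what allow the analysis to identify one instructor as more lenient and another as more severe, and rater fit statistics (infit/outfit) flag inconsistent rubric use.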
Keywords: electronic rubric, many facet Rasch model, rater behavior, leniency and severity, consistency
Primary Language en
Subjects Education, Scientific Disciplines
Published Date June
Journal Section Articles

Orcid: 0000-0001-8713-2935
Author: Inan Deniz ERGUVAN (Primary Author)
Institution: Gulf University for Science & Technology
Country: Kuwait

Orcid: 0000-0003-4994-1429
Author: Beyza AKSU DÜNYA
Country: Turkey

Supporting Institution This work was funded by Gulf University for Science and Technology, Kuwait.
Project Number 187226

Publication Date : June 10, 2021

APA Erguvan, I. D., & Aksu Dünya, B. (2021). Gathering evidence on e-rubrics: Perspectives and many facet Rasch analysis of rating behavior. International Journal of Assessment Tools in Education, 8(2), 454-474. DOI: 10.21449/ijate.818151