Research Article

Gathering evidence on e-rubrics: Perspectives and many facet Rasch analysis of rating behavior

Year 2021, Volume 8, Issue 2, 454-474, 10.06.2021
https://doi.org/10.21449/ijate.818151

Abstract

This study examined faculty perspectives on the use of electronic rubrics (e-rubrics) and faculty rating behavior in a freshman composition course. A mixed-methods approach was employed for data collection and analysis. Data on faculty perspectives were collected through semi-structured interviews with nine instructors; for rating behavior, six instructors teaching the same course in Fall 2019 shared their students’ essay scores with the researchers. The many-facet Rasch model (MFRM) was used for quantitative data analysis. The quantitative findings showed that the instructors differed in their degree of leniency and severity, with one instructor more lenient and one more severe than the others. In addition, one instructor turned out to be an inconsistent user of the e-rubric. The qualitative findings showed that writing faculty consider e-rubrics to offer substantial advantages, such as facilitating scoring, ensuring standardization, and reducing student complaints and grade appeals; however, they view the impact of e-rubrics on student writing with cautious optimism. The qualitative and quantitative strands overlap, and the participants’ responses shed some light on the rating behavior of the writing faculty.
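For readers unfamiliar with the method, a standard three-facet formulation of MFRM is given below as background (following Linacre and Eckes, both cited in the references; this equation is supplied here for orientation, not reproduced from the article itself):

$$\ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = \theta_n - \beta_i - \alpha_j - \tau_k$$

Here $\theta_n$ is the writing ability of student $n$, $\beta_i$ the difficulty of rubric criterion $i$, $\alpha_j$ the severity of rater $j$, and $\tau_k$ the threshold for receiving rating category $k$ rather than $k-1$. Because all facets are calibrated on a common logit scale, statements such as "one instructor is more lenient than the others" are directly comparable across raters, and rater inconsistency of the kind reported in the abstract is conventionally flagged by infit/outfit mean-square statistics falling well outside their expected value of 1.0 (see Linacre, 2002).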

Supporting Institution

This work was funded by Gulf University for Science and Technology, Kuwait.

Project Number

187226

References

  • Allen, D., & Tanner, K. (2006). Rubrics: Tools for making learning goals and evaluation criteria explicit for both teachers and learners. CBE Life Sciences Education, 5(3), 197-203. https://doi.org/10.1187/cbe.06-06-0168
  • Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57(5), 13-18.
  • Andrade, H. G. (2005). Teaching with rubrics: The good, the bad, and the ugly. College Teaching, 53(1), 27-31. https://doi.org/10.3200/CTCH.53.1.27-31
  • Anglin, L., Anglin, K., Schumann, P. L., & Kalinski, J. A. (2008). Improving the efficiency and effectiveness of grading through the use of computer-assisted grading rubrics. Decision Sciences Journal of Innovative Education, 6(1), 51-73. https://doi.org/10.1111/j.1540-4609.2007.00153.x
  • Arter, J., & McTighe, J. (2001). Scoring rubrics in the classroom: Using performance criteria for assessing and improving student performance. Corwin Press.
  • Atkinson, D., & Lim, S. L. (2013). Improving assessment processes in higher education: Student and teacher perceptions of the effectiveness of a rubric embedded in a LMS. Australasian Journal of Educational Technology, 29(5), 651-666. https://doi.org/10.14742/ajet.526
  • Broad, B. (2000). Pulling your hair out: Crises of standardization in communal writing assessment. Research in the Teaching of English, 35(2), 213-260. https://www.jstor.org/stable/40171515
  • Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. ASCD.
  • Brookhart, S. M. (2018). Appropriate criteria: Key to effective rubrics. Frontiers in Education, 3. https://doi.org/10.3389/feduc.2018.00022
  • Brookhart, S. M., & Chen, F. (2015). The quality and effectiveness of descriptive rubrics. Educational Review, 67(3), 343-368. https://doi.org/10.1080/00131911.2014.929565
  • Brown, J. (2001). Using surveys in language programs. Cambridge University Press.
  • Carr, N. T. (2000). A comparison of the effects of analytic and holistic rating scale types in the context of composition tests. Issues in Applied Linguistics, 11(2), 207-241. https://escholarship.org/uc/item/4dw4z8rt
  • Creswell, J. W. (2007). Qualitative inquiry and research design: Choosing among five approaches (2nd ed.). SAGE.
  • Creswell, J. W. (2012). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (4th ed.). Pearson Education.
  • Creswell, J. W., & Clark, V. L. (2011). Choosing a mixed methods design. In Designing and conducting mixed methods research (3rd ed., pp. 53-106). SAGE.
  • Dobria, L. (2011). Longitudinal rater modeling with splines (Publication No. 3472389) [Doctoral dissertation, University of Illinois at Chicago]. ProQuest Dissertations and Theses.
  • Eckes, T. (2011). Introduction to many-facet Rasch measurement: Analyzing and evaluating rater-mediated assessments. Peter Lang.
  • Freeman, S., & Parks, J. W. (2010). How accurate is peer grading? CBE Life Sciences Education, 9(4), 482-488. https://doi.org/10.1187/cbe.10-03-0017
  • Fulbright, S. (2018, October 18). Using rubrics as a defense against grade appeals. Faculty Focus. https://www.facultyfocus.com/articles/course-design-ideas/rubrics-as-a-defense-against-grade-appeals/
  • Garcia-Ros, R. (2011). Analysis and validation of a rubric to assess oral presentation skills in university contexts. Electronic Journal of Research in Educational Psychology, 9(3), 1043-1062.
  • Hafner, J.C., & Hafner, P.M. (2003). Quantitative analysis of the rubric as an assessment tool: An empirical study of student peer-group rating. International Journal of Science Education, 25(12), 1509-1528. https://doi.org/10.1080/0950069022000038268
  • Hicks, N., & Diefes-Dux, H. (2017). Grader consistency in using standards-based rubrics. 2017 ASEE Annual Conference & Exposition Proceedings. https://doi.org/10.18260/1-2--28416
  • Iramaneerat, C., & Yudkowsky, R. (2007). Rater errors in a clinical skills assessment of medical students. Evaluation & the Health Professions, 30(3), 266-283. https://doi.org/10.1177/0163278707304040
  • Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130-144. https://doi.org/10.1016/j.edurev.2007.05.002
  • Kirwin, J., & DiVall, M. (2015, October). Using electronic rubrics to produce actionable assessment data in a skills-based course [Conference session]. 2015 Assessment Institute in Indianapolis, Indianapolis. https://assessmentinstitute.iupui.edu/
  • Kohn, A. (2006). Speaking my mind: The trouble with rubrics. English Journal, 95(4), 12-15. https://doi.org/10.2307/30047080
  • Krane, D. (2018, August 30). Guest post: What students see in rubrics. Inside Higher Ed. https://www.insidehighered.com/blogs/just-visiting/guest-post-what-students-see-rubrics
  • Linacre, J. M. (2002). What do infit and outfit, mean-square and standardized mean? Rasch Measurement Transactions, 16(2), 878. https://www.rasch.org/rmt/rmt162f.htm
  • Linacre, J. M. (2019). A user’s guide to FACETS: Rasch-model computer programs. https://www.winsteps.com/tutorials.htm
  • Linacre, J. M. (2020). FACETS (Version 3.83.2) [Computer software]. https://www.winsteps.com/winbuy.htm
  • Martínez, D., Cebrián, D., & Cebrián, M. (2016). Assessment of teaching skills with e-rubrics in Master of Teacher Training. Journal for Educators, Teachers and Trainers, 7(2), 120-141. https://jett.labosfor.com/article_855.html
  • Mertler, C. A. (2009). Action research: Teachers as researchers in the classroom (2nd ed.). SAGE.
  • Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook. SAGE.
  • Myford, C. M., & Dobria, L. (2012). FACETS introductory workshop tutorial. University of Illinois at Chicago.
  • Myford, C. M., & Wolfe, E. W. (2003). Detecting and measuring rater effects using many-facet Rasch measurement. Journal of Applied Measurement, 5(2), 189-223.
  • Nordrum, L., Evans, K., & Gustafsson, M. (2013). Comparing student learning experiences of in-text commentary and rubric-articulated feedback: Strategies for formative assessment. Assessment & Evaluation in Higher Education, 38, 919-940. https://doi.org/10.1080/02602938.2012.758229
  • Qasim, A., & Qasim, Z. (2015). Using rubrics to assess writing: Pros and cons in Pakistani teachers’ opinions. Journal of Literature, Languages and Linguistics, 16, 51-58. https://www.researchgate.net/publication/285815750
  • Raddawi, R., & Bilikozen, N. (2018). ELT professors’ perspectives on the use of e-rubrics in an academic writing class in a university in the UAE. In Assessing EFL writing in the 21st century Arab world (pp. 221-260). https://doi.org/10.1007/978-3-319-64104-1_9
  • Radnor, H. A. (2002). Researching your professional practice: Doing interpretive research. Open University Press.
  • Raposo-Rivas, M., & Gallego-Arrufat, M.J. (2016). University students’ perceptions of electronic rubric-based assessment. Digital Education Review, 30, 220-233. http://greav.ub.edu/der/
  • Reddy, Y. M., & Andrade, H. (2010). A review of rubric use in higher education. Assessment & Evaluation in Higher Education, 35(4), 435-448. https://doi.org/10.1080/02602930902862859
  • Rivasy, M. R., De La Serna, M. C., & Martínez-Figueira, E. (2014). Electronic rubrics to assess competencies in ICT subjects. European Educational Research Journal, 13(5), 584-594. https://doi.org/10.2304/eerj.2014.13.5.584
  • Saal, F. E., Downey, R. G., & Lahey, M. A. (1980). Rating the ratings: Assessing the psychometric quality of rating data. Psychological Bulletin, 88(2), 413-428.
  • Sadler, P. M., & Good, E. (2006). The impact of self- and peer-grading on student learning. Educational Assessment, 11(1), 1-31. https://doi.org/10.1207/s15326977ea1101_1
  • Sharma, V. (2019). Teacher perspicacity to using rubrics in students’ EFL learning and assessment. Journal of English Language Teaching and Applied Linguistics, 1(1), 16-31. https://www.researchgate.net/publication/337771674
  • Silverman, D. (2000). Doing qualitative research: A practical handbook. SAGE.
  • Steffens, K. (2014). E-rubrics to facilitate self-regulated learning. REDU. Revista de Docencia Universitaria, 12(1), 11-12. https://doi.org/10.4995/redu.2014.6417
  • Torrance, H. (2007). Assessment as learning? How the use of explicit learning objectives, assessment criteria and feedback in post-secondary education and training can come to dominate learning. Assessment in Education: Principles, Policy & Practice, 14(3), 281-294. https://doi.org/10.1080/09695940701591867
  • Wilson, M. (2007). Why I won't be using rubrics to respond to students' writing. English Journal, 96(4), 62-66. https://doi.org/10.2307/30047167


Details

Primary Language English
Subjects Studies on Education
Journal Section Articles
Authors

Inan Deniz Erguvan (ORCID: 0000-0001-8713-2935)

Beyza Aksu Dünya (ORCID: 0000-0003-4994-1429)

Project Number 187226
Publication Date June 10, 2021
Submission Date October 29, 2020
Published in Issue Year 2021

Cite

APA Erguvan, I. D., & Aksu Dünya, B. (2021). Gathering evidence on e-rubrics: Perspectives and many facet Rasch analysis of rating behavior. International Journal of Assessment Tools in Education, 8(2), 454-474. https://doi.org/10.21449/ijate.818151
