TY - JOUR
T1 - Home-Grown Automated Essay Scoring in the Literature Classroom: A Solution for Managing the Crowd?
AU - Uzun, Kutay
PY - 2018
DA - October
DO - 10.30935/cet.471024
JF - Contemporary Educational Technology
PB - Ali ŞİMŞEK
WT - DergiPark
SN - 1309-517X
SP - 423
EP - 436
VL - 9
IS - 4
LA - en
AB - Managing crowded classes in terms of classroom assessment is a difficult task due to the amount of time which needs to be devoted to providing feedback to student products. In this respect, the present study aimed to develop an automated essay scoring environment as a potential means to overcome this problem. Secondarily, the study aimed to test if automatically-given scores would correlate with the scores given by a human rater. A quantitative research design employing a machine learning approach was preferred to meet the aims of the study. The data set to be used for machine learning consisted of 160 scored literary analysis essays written in an English Literature course, each essay analyzing a theme in a given literary work. To train the automated scoring model, LightSide software was used. First, textual features were extracted and filtered. Then, Logistic Regression, SMO, SVO, Logistic Tree and Naïve Bayes text classification algorithms were tested by using 10-Fold Cross-Validation to reach the most accurate model. To see if the scores given by the computer correlated with the scores given by the human rater, Spearman’s Rank Order Correlation Coefficient was calculated. The results showed that none of the algorithms were sufficiently accurate in terms of the scores of the essays within the data set. It was also seen that the scores given by the computer were not significantly correlated with the scores given by the human rater. The findings implied that the size of the data collected in an authentic classroom environment was too small for classification algorithms in terms of automated essay scoring for classroom assessment.
KW - Automated essay scoring
KW - Literary analysis essay
KW - Classification algorithms
KW - Machine learning
CR - Abu-Mostafa, Y. S., Magdon-Ismail, M., & Lin, H.-T. (2012). Learning from data (1st ed.). AMLBook.
CR - Anderson, S. E., & Ben Jafaar, S. (2006). Policy trends in Ontario education: 1990-2003 (ICEC Working Paper #1). University of Toronto, Ontario Institute. Retrieved on 22 January 2018 from http://fcis.oise.utoronto.ca/~icec/policytrends.pdf
CR - Attali, Y. & Burstein, J. (2006). Automated essay scoring with e-rater V.2. Journal of Technology, Learning, and Assessment (JTLA), 4(3). Retrieved on 11 March 2016 from http://ejournals.bc.edu/ojs/index.php/jtla/article/view/1650
CR - Attali, Y. (2013). Validity and reliability of automated essay scoring. In M. D. Shermis & J. C. Burstein (Eds.), Handbook of automated essay evaluation: Current applications and new directions (pp. 181-198). New York, NY: Routledge.
CR - Baker, N. L. (2014). “Get it off my stack”: Teachers’ tools for grading papers. Assessing Writing, 19, 36-50. doi: 10.1016/j.asw.2013.11.005
CR - Barker, T. (2011). An automated individual feedback and marking system: An empirical study. The Electronic Journal of E-Learning, 9(1), 1-14.
CR - Bauer, J. (2016). A new approach: Closing the writing gap by using reliable assessment to guide and evaluate cross-curricular argumentative writing (Unpublished master’s thesis). The University of Wisconsin, USA.
CR - Brookhart, S. M. & Bronowicz, D. L. (2003). “I don’t like writing. It makes my fingers hurt”: Students talk about their classroom assessments. Assessment in Education, 10(2), 221-242.
CR - Chih-Min, S. & Li-Yi, W. (2013). Factors affecting English language teachers’ classroom assessment practices: A case study of Singapore secondary schools. NIE research brief series. Retrieved on 12 March 2016 from http://hdl.handle.net/10497/15003
CR - Darling-Hammond, L., Chung, R., & Frelow, F. (2002). Variation in teacher preparation: How well do different pathways prepare teachers to teach? Journal of Teacher Education, 53(4), 286-302.
CR - Deane, P. (2013). On the relation between automated essay scoring and modern views of the writing construct. Assessing Writing, 18(1), 7-24.
CR - Dikli, S. (2006). An overview of automated scoring of essays. The Journal of Technology, Learning and Assessment, 5(1), 1-35.
CR - Dornyei, Z. (2007). Research methods in applied linguistics. New York: Oxford University Press.
CR - Duncan, C. R. & Noonan, B. (2007). Factors affecting teachers' grading and assessment practices. Alberta Journal of Educational Research, 53(1), 1-21.
CR - Fulcher, G. & Davidson, F. (2007). Language testing and assessment: An advanced resource book. London: Routledge.
CR - Gavriel, J. (2013). Assessment for learning: A wider (classroom-researched) perspective is important for formative assessment and self-directed learning in general practice. Education for Primary Care, 24(2), 93-96.
CR - Greenstein, L. (2010). What teachers really need to know about formative assessment. Alexandria, VA: ASCD.
CR - Heilman, M. & Madnani, N. (2015). The impact of training data on automated short answer scoring performance. Proceedings from NAACL HLT: The tenth workshop on innovative use of NLP for building educational applications (pp. 81-85). The Association for Computational Linguistics: USA. Retrieved on 13 May 2016 from http://www.cs.rochester.edu/u/tetreaul/bea10proceedings.pdf#page=270
CR - Hyland, K. & Hyland, F. (Eds.). (2006). Feedback in second language writing: Contexts and issues. New York: Cambridge University Press. http://dx.doi.org/10.1017/CBO9781139524742
CR - Imaki, J. & Ishihara, S. (2013). Experimenting with a Japanese automated essay scoring system in the L2 Japanese environment. Papers in Language Testing and Assessment, 2(2), 28-47.
CR - Jung, E. (2017). A comparison of data mining methods in analyzing educational data. In J. Park, Y. Pan, G. Yi, & V. Loia (Eds.), Advances in computer science and ubiquitous computing: CSA-CUTE2016 (pp. 173-178). Singapore: Springer.
CR - Kumar, C. S., & Rama Sree, R. J. (2014). An attempt to improve classification accuracy through implementation of bootstrap aggregation with sequential minimal optimization during automated evaluation of descriptive answers. Indian Journal of Science and Technology, 7(9), 1369-1375.
CR - Lai, Y. H. (2010). Which do students prefer to evaluate their essays: Peers or computer program? British Journal of Educational Technology, 41(3), 432-454.
CR - Landauer, T. K., Laham, D. & Foltz, P. W. (2000). The intelligent essay assessor. IEEE Intelligent Systems & Their Applications, 15(5), 27-31.
CR - Lee, I. (2014). Teachers’ reflection on implementation of innovative feedback approaches in EFL writing. English Teaching, 69(1), 23-40.
CR - Mayfield, E. & Rose, C. P. (2013). LightSIDE: Open source machine learning for text. In M. D. Shermis & J. C. Burstein (Eds.), Handbook of automated essay evaluation: Current applications and new directions (pp. 124-135). New York: Psychology Press.
CR - McMillan, J. H. (2007). Classroom assessment: Principles and practice for effective standards-based instruction. Boston: Pearson.
CR - Norbert, E. & Williamson, D. M. (2013). Assessing writing special issue: Assessing writing with automated scoring systems. Assessing Writing, 18(1), 1-6.
CR - Nunan, D. (2010). Technology supports for second language learning. In P. Peterson, E. Baker, & B. McGaw (Eds.), International Encyclopedia of Education (pp. 204-210). Elsevier.
CR - Page, E. B. (2003). Project essay grade: PEG. In M. D. Shermis & J. Burstein (Eds.), Automated essay scoring: A cross-disciplinary perspective (pp. 43-54). Mahwah, NJ: Lawrence Erlbaum Associates.
CR - Paran, A. (2006). The stories of literature and language teaching. In A. Paran (Ed.), Literature in language teaching and learning (pp. 1-10). Alexandria, VA: TESOL.
CR - Powers, D. E., Escoffery, D. S., & Duchnowski, M. P. (2015). Validating automated essay scoring: A (modest) refinement of the “gold standard”. Applied Measurement in Education, 28(2), 130-142. doi: 10.1080/08957347.2014.1002920
CR - Ritter, K. (2012). Ladies who don’t know us correct our papers: Postwar lay reader programs and twenty-first century contingent labor in first-year writing. College Composition and Communication, 63(3), 387-419.
CR - Sakiyama, Y., Yuki, H., Moriya, T., Hattori, K., Suzuki, M., Shimada, K., & Honma, T. (2008). Predicting human liver microsomal stability with machine learning techniques. Journal of Molecular Graphics and Modelling, 26, 907-915.
CR - Santos, V. D. O., Verspoor, M., & Nerbonne, J. (2012). Identifying important factors in essay grading using machine learning. In D. Tsagari, S. Papadima-Sophocleous, & S. Ioannou-Georgiou (Eds.), International experiences in language testing and assessment—Selected papers in memory of Pavlos Pavlou (pp. 295-309). Frankfurt am Main, Germany: Peter Lang GmbH.
CR - Shermis, M. D. & Burstein, J. (2003). Automated essay scoring: A cross-disciplinary perspective. Mahwah, NJ: Lawrence Erlbaum Associates.
CR - Shermis, M. D., & Di Vesta, F. J. (2011). Classroom assessment in action. Maryland: Rowman and Littlefield Publishers.
CR - Shermis, M. D. & Hamner, B. (2013). Contrasting state-of-the-art automated scoring of essays. In M. D. Shermis & J. Burstein (Eds.), Handbook of automated essay evaluation: Current applications and new directions (pp. 213-246). New York, NY: Routledge.
CR - Stankova, E. N., Balakshiy, A. V., Petrov, D. A., Shorov, A. V., & Korkhov, V. V. (2016). Using technologies of OLAP and machine learning for validation of the numerical models of convective clouds. Lecture Notes in Computer Science, 9788, 463-472. doi: 10.1007/978-3-319-42111-7_36
CR - Trivedi, S., Pardos, Z. A., & Heffernan, N. T. (2015). The utility of clustering in prediction tasks. CoRR, 1509.06163. Retrieved on 01 October 2018 from https://arxiv.org/abs/1509.06163
CR - Viera, A. J. & Garrett, J. M. (2005). Understanding interobserver agreement: The kappa statistic. Family Medicine, 37(5), 360-363.
CR - Weigle, S. C. (2013). English language learners and automated scoring of essays: Critical considerations. Assessing Writing, 18(1), 85-99.
CR - Westera, W., Dascalu, M., Kurvers, H., Ruseti, S., & Trausan-Matu, S. (2018). Automated essay scoring in applied games: Reducing the teacher bandwidth problem in online training. Computers & Education, 123, 212-224. https://doi.org/10.1016/j.compedu.2018.05.010
CR - Wilson, J. (2018). Universal screening with automated essay scoring: Evaluating classification accuracy in grades 3 and 4. Journal of School Psychology, 68, 19-37. https://doi.org/10.1016/j.jsp.2017.12.005
CR - Yamamoto, M., Umemura, N., & Kawano, H. (2018). Automated essay scoring system based on rubric. In R. Lee (Ed.), Applied computing & information technology. ACIT 2017. Studies in computational intelligence, vol. 727 (pp. 177-190). Springer, Cham. https://doi.org/10.1007/978-3-319-64051-8_11
CR - Yang, M., Kim, M., Lee, H., & Rim, H. (2012). Assessing writing fluency of non-English speaking student for automated essay scoring – How to automatically evaluate the fluency in English essay. Paper presented at the 4th International Conference on Computer Supported Education. Porto, Portugal. Retrieved on 11 March 2016 from http://www.researchgate.net
CR - Yang, W. (2012). A study of students’ perceptions and attitudes towards genre-based ESP writing instruction. The Asian ESP Journal, 8(3), 50-73.
CR - Young, V. M., & Kim, D. H. (2010). Using assessments for instructional improvement: A literature review. Education Policy Analysis Archives, 18(19), 1-40.
UR - https://doi.org/10.30935/cet.471024
L1 - https://dergipark.org.tr/en/download/article-file/554677
ER -