Research Article

Home-Grown Automated Essay Scoring in the Literature Classroom: A Solution for Managing the Crowd?

Year 2018, Volume: 9, Issue: 4, 423-436, 16.10.2018
https://doi.org/10.30935/cet.471024

Abstract

Managing crowded classes is a difficult task in terms of classroom assessment because of the amount of time that must be devoted to providing feedback on student work. The present study therefore aimed to develop an automated essay scoring environment as a potential means of overcoming this problem. As a secondary aim, the study tested whether automatically assigned scores would correlate with the scores given by a human rater. A quantitative research design employing a machine learning approach was adopted to meet the aims of the study. The data set used for machine learning consisted of 160 scored literary analysis essays written in an English Literature course, each analyzing a theme in a given literary work. The automated scoring model was trained with the LightSide software. First, textual features were extracted and filtered. Then, the Logistic Regression, SMO, SVO, Logistic Tree, and Naïve Bayes text classification algorithms were tested with 10-fold cross-validation to find the most accurate model. To determine whether the scores assigned by the computer correlated with the scores given by the human rater, Spearman's rank-order correlation coefficient was calculated. The results showed that none of the algorithms predicted the scores of the essays in the data set with sufficient accuracy, and the computer-assigned scores did not correlate significantly with those of the human rater. The findings implied that the amount of data collected in an authentic classroom environment was too small for training classification algorithms to perform automated essay scoring for classroom assessment.
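
The workflow summarized in the abstract (extracting textual features from scored essays, comparing several classifiers under 10-fold cross-validation, and correlating machine scores with a human rater's scores) can be sketched in a few lines of code. The study itself used LightSide; the sketch below is a minimal reproduction of the same workflow with scikit-learn instead, so the file name, column names, feature settings, and the linear SVC standing in for SMO are illustrative assumptions, not details taken from the study.

```python
# A minimal sketch (not the study's actual LightSide pipeline) of the workflow the
# abstract describes: extract textual features from scored essays, compare several
# text-classification algorithms with 10-fold cross-validation, and check how well
# machine-assigned scores track a human rater's scores with Spearman's rank correlation.
# The file name, column names, and feature settings below are illustrative assumptions.

import pandas as pd
from scipy.stats import spearmanr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict, cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hypothetical data file: one essay per row, with the human rater's score as the label.
data = pd.read_csv("scored_essays.csv")  # columns assumed: "essay", "score"
X, y = data["essay"], data["score"]

# Unigram/bigram features stand in for LightSide's extracted-and-filtered feature set.
features = TfidfVectorizer(ngram_range=(1, 2), min_df=2)

classifiers = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    # SMO is Weka's SVM trainer; a linear SVC is the closest scikit-learn analogue.
    "SMO-style SVM": SVC(kernel="linear"),
    "Naive Bayes": MultinomialNB(),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

for name, clf in classifiers.items():
    model = make_pipeline(features, clf)
    # Mean accuracy across the 10 folds, as in the study's model comparison.
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy").mean()
    # Out-of-fold predictions, so every essay gets a machine score for the correlation check.
    preds = cross_val_predict(model, X, y, cv=cv)
    rho, p = spearmanr(y, preds)
    print(f"{name}: accuracy={acc:.3f}, Spearman rho={rho:.3f} (p={p:.3f})")
```

With only 160 essays, fold-to-fold results in a comparison like this can vary widely, which is consistent with the study's conclusion that a data set of this size is too small for reliable automated scoring.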

References

  • Abu-Mostafa, Y. S., Magdon-Ismail, M., & Lin, H.-T. (2012). Learning from data (1st ed.). Seattle: AMLBook.
  • Anderson, S.E., & Ben Jaafar, S. (2006). Policy trends in Ontario education: 1990-2003 (ICEC Working Paper #1). Ontario Institute for Studies in Education, University of Toronto. Retrieved on 22 January 2018 from http://fcis.oise.utoronto.ca/~icec/policytrends.pdf
  • Attali, Y. & Burstein, J. (2006). Automated essay scoring with e-rater V.2. Journal of Technology, Learning, and Assessment (JTLA), 4(3). Retrieved on 11 March 2016 from http://ejournals.bc.edu/ojs/index.php/jtla/article/view/1650
  • Attali, Y. (2013). Validity and reliability of automated essay scoring. In M.D. Shermis & J.C. Burstein (Eds.), Handbook of automated essay evaluation: Current applications and new directions (pp. 181-198). New York, NY: Routledge.
  • Baker, N. L. (2014). “Get it off my stack”: Teachers’ tools for grading papers. Assessing Writing, 19, 36-50. doi: 10.1016/j.asw.2013.11.005.
  • Barker, T. (2011). An automated individual feedback and marking system: An empirical study. The Electronic Journal of E-Learning, 9(1), 1-14.
  • Bauer, J. (2016). A new approach: Closing the writing gap by using reliable assessment to guide and evaluate cross-curricular argumentative writing (Unpublished master’s thesis). The University of Wisconsin, USA.
  • Brookhart, S.M. & Bronowicz, D.L. (2003). “I don’t like writing. It makes my fingers hurt”: Students talk about their classroom assessments. Assessment in Education, 10(2), 221-242.
  • Chih-Min, S. & Li-Yi, W. (2013). Factors affecting English language teachers’ classroom assessment practices: A case study of Singapore secondary schools. NIE research brief series. Retrieved on 12 March 2016 from http://hdl.handle.net/10497/15003
  • Darling-Hammond, L., Chung, R., & Frelow, F. (2002). Variation in teacher preparation: How well do different pathways prepare teachers to teach? Journal of Teacher Education, 53(4), 286-302.
  • Deane, P. (2013). On the relation between automated essay scoring and modern views of the writing construct. Assessing Writing, 18(1), 7-24.
  • Dikli, S. (2006). An overview of automated scoring of essays. The Journal of Technology, Learning and Assessment, 5(1), 1-35.
  • Dornyei, Z. (2007). Research methods in applied linguistics. New York: Oxford University Press.
  • Duncan, C. R. & Noonan, B. (2007). Factors affecting teachers' grading and assessment practices. Alberta Journal of Educational Research, 53(1), 1-21.
  • Fulcher, G. & Davidson, F. (2007). Language testing and assessment: An advanced resource book. London: Routledge.
  • Gavriel, J. (2013). Assessment for learning: A wider (classroom-researched) perspective is important for formative assessment and self-directed learning in general practice. Education for Primary Care, 24(2), 93-96.
  • Greenstein, L. (2010). What teachers really need to know about formative assessment. Alexandria, VA: ASCD.
  • Heilman, M. & Madnani, N. (2015). The impact of training data on automated short answer scoring performance. Proceedings from NAACL HLT: The tenth workshop on innovative use of NLP for building educational applications (pp. 81-85). The Association for Computational Linguistics: USA. Retrieved on 13 May 2016 from http://www.cs.rochester.edu/u/tetreaul/bea10proceedings.pdf#page=270
  • Hyland, K. & Hyland, F. (Eds.). (2006). Feedback in second language writing: Contexts and issues. New York: Cambridge University Press. http://dx.doi.org/10.1017/CBO9781139524742
  • Imaki, J. & Ishihara, S. (2013). Experimenting with a Japanese automated essay scoring system in the L2 Japanese environment. Papers in Language Testing and Assessment, 2(2), 28-47.
  • Jung, E. (2017). A comparison of data mining methods in analyzing educational data. In J. Park, Y. Pan, G. Yi, & V. Loia (Eds.), Advances in computer science and ubiquitous computing CSA-CUTE2016 (pp. 173-178). Singapore: Springer.
  • Kumar, C. S., & Rama Sree, R. J. (2014). An attempt to improve classification accuracy through implementation of bootstrap aggregation with sequential minimal optimization during automated evaluation of descriptive answers. Indian Journal of Science and Technology, 7(9), 1369-1375.
  • Lai, Y.H. (2010). Which do students prefer to evaluate their essays: Peers or computer program? British Journal of Educational Technology, 41(3), 432-454.
  • Landauer, T. K., Laham, D. & Foltz, P. W. (2000). The intelligent essay assessor. IEEE Intelligent Systems & Their Applications, 15(5), 27-31.
  • Lee, I. (2014). Teachers’ reflection on implementation of innovative feedback approaches in EFL writing. English Teaching, 69(1), 23-40.
  • Mayfield, E. & Rose, C. P. (2013). LightSIDE: Open source machine learning for text. In M.D. Shermis & J.C. Burstein (Eds.), Handbook of automated essay evaluation: Current applications and new directions (pp. 124-135). New York: Psychology Press.
  • McMillan, J. H. (2007). Classroom assessment: Principles and practice for effective standards-based instruction. Boston: Pearson.
  • Elliot, N. & Williamson, D. M. (2013). Assessing writing special issue: Assessing writing with automated scoring systems. Assessing Writing, 18(1), 1-6.
  • Nunan, D. (2010). Technology supports for second language learning. In P. Peterson, E. Baker, & B. McGaw (Eds.), International Encyclopedia of Education (pp. 204-210). Elsevier.
  • Page, E. B. (2003). Project essay grade: PEG. In M. D. Shermis & J. Burstein (Eds.), Automated essay scoring: A cross-disciplinary perspective (pp. 43-54). Mahwah, NJ: Lawrence Erlbaum Associates.
  • Paran, A. (2006). The stories of literature and language teaching. In A. Paran (Ed.), Literature in language teaching and learning (pp. 1-10). Alexandria, VA: TESOL.
  • Powers, D. E., Escoffery, D. S., & Duchnowski, M. P. (2015). Validating automated essay scoring: A (modest) refinement of the “gold standard”. Applied Measurement in Education, 28(2), 130-142. doi: 10.1080/08957347.2014.1002920
  • Ritter, K. (2012). Ladies who don’t know us correct our papers: Postwar lay reader programs and twenty-first century contingent labor in first-year writing. College Composition and Communication, 63(3), 387-419.
  • Sakiyama, Y., Yuki, H., Moriya, T., Hattori, K., Suzuki, M., Shimada, K., & Honma, T. (2008). Predicting human liver microsomal stability with machine learning techniques. Journal of Molecular Graphics and Modelling, 26, 907-915.
  • Santos, V. D. O., Verspoor, M., & Nerbonne, J. (2012). Identifying important factors in essay grading using machine learning. In D. Tsagari, S. Papadima-Sophocleous, & S. Ioannou-Georgiou (Eds.), International experiences in language testing and assessment—Selected papers in memory of Pavlos Pavlou (pp. 295–309). Frankfurt am Main, Germany: Peter Lang GmbH.
  • Shermis, M. D. & Burstein, J. (2003). Automated essay scoring: A cross disciplinary perspective. Mahwah, NJ: Lawrence Erlbaum Associates.
  • Shermis, M. D., & Di Vesta, F. J. (2011). Classroom assessment in action. Maryland: Rowman & Littlefield Publishers.
  • Shermis, M. D. & Hamner, B. (2013). Contrasting state-of-the-art automated scoring of essays. In M. D. Shermis & J. Burstein (Eds.), Handbook of automated essay evaluation: Current applications and new directions (pp. 213–246). New York, NY: Routledge.
  • Stankova, E. N., Balakshiy, A. V., Petrov, D. A., Shorov, A. V., & Korkhov, V. V. (2016). Using technologies of OLAP and machine learning for validation of the numerical models of convective clouds. Lecture Notes in Computer Science, 9788, 463-472. doi: 10.1007/978-3-319-42111-7_36
  • Trivedi, S., Pardos, Z. A., & Heffernan, N. T. (2015). The utility of clustering in prediction tasks. CoRR, abs/1509.06163. Retrieved on 01 October 2018 from https://arxiv.org/abs/1509.06163
  • Viera, A. J. & Garrett, J. M. (2005). Understanding interobserver agreement: The kappa statistic. Family Medicine, 37(5), 360-363.
  • Weigle, S. C. (2013). English language learners and automated scoring of essays: Critical considerations. Assessing Writing, 18(1), 85-99.
  • Westera, W., Dascalu, M., Kurvers, H., Ruseti, S., & Trausan-Matu, S. (2018). Automated essay scoring in applied games: Reducing the teacher bandwidth problem in online training. Computers & Education, 123, 212-224. https://doi.org/10.1016/j.compedu.2018.05.010
  • Wilson, J. (2018). Universal screening with automated essay scoring: Evaluating classification accuracy in grades 3 and 4. Journal of School Psychology, 68, 19-37. https://doi.org/10.1016/j.jsp.2017.12.005
  • Yamamoto, M., Umemura, N., & Kawano, H. (2018). Automated essay scoring system based on rubric. In R. Lee (Ed.), Applied computing & information technology. ACIT 2017. Studies in computational intelligence, vol. 727 (pp. 177-190). Springer, Cham. https://doi.org/10.1007/978-3-319-64051-8_11
  • Yang, M., Kim, M., Lee, H., & Rim, H. (2012). Assessing writing fluency of non-English speaking student for automated essay scoring – How to automatically evaluate the fluency in English essay. Paper presented at the 4th International Conference on Computer Supported Education, Porto, Portugal. Retrieved on 11 March 2016 from http://www.researchgate.net
  • Yang, W. (2012). A study of students’ perceptions and attitudes towards genre-based ESP writing instruction. The Asian ESP Journal, 8(3), 50-73.
  • Young, V. M., & Kim, D. H. (2010). Using assessments for instructional improvement: A literature review. Education Policy Analysis Archives, 18(19), 1-40.

Details

Primary Language English
Section Articles
Authors

Kutay Uzun

Publication Date October 16, 2018
Published Issue Year 2018 Volume: 9 Issue: 4

How to Cite

APA Uzun, K. (2018). Home-Grown Automated Essay Scoring in the Literature Classroom: A Solution for Managing the Crowd?. Contemporary Educational Technology, 9(4), 423-436. https://doi.org/10.30935/cet.471024
AMA Uzun K. Home-Grown Automated Essay Scoring in the Literature Classroom: A Solution for Managing the Crowd?. Contemporary Educational Technology. October 2018;9(4):423-436. doi:10.30935/cet.471024
Chicago Uzun, Kutay. “Home-Grown Automated Essay Scoring in the Literature Classroom: A Solution for Managing the Crowd?”. Contemporary Educational Technology 9, no. 4 (October 2018): 423-36. https://doi.org/10.30935/cet.471024.
EndNote Uzun K (01 October 2018) Home-Grown Automated Essay Scoring in the Literature Classroom: A Solution for Managing the Crowd?. Contemporary Educational Technology 9 4 423–436.
IEEE K. Uzun, “Home-Grown Automated Essay Scoring in the Literature Classroom: A Solution for Managing the Crowd?”, Contemporary Educational Technology, vol. 9, no. 4, pp. 423–436, 2018, doi: 10.30935/cet.471024.
ISNAD Uzun, Kutay. “Home-Grown Automated Essay Scoring in the Literature Classroom: A Solution for Managing the Crowd?”. Contemporary Educational Technology 9/4 (October 2018), 423-436. https://doi.org/10.30935/cet.471024.
JAMA Uzun K. Home-Grown Automated Essay Scoring in the Literature Classroom: A Solution for Managing the Crowd?. Contemporary Educational Technology. 2018;9:423–436.
MLA Uzun, Kutay. “Home-Grown Automated Essay Scoring in the Literature Classroom: A Solution for Managing the Crowd?”. Contemporary Educational Technology, vol. 9, no. 4, 2018, pp. 423-36, doi:10.30935/cet.471024.
Vancouver Uzun K. Home-Grown Automated Essay Scoring in the Literature Classroom: A Solution for Managing the Crowd?. Contemporary Educational Technology. 2018;9(4):423-36.