Classroom assessment in crowded classes is a demanding task because of the time
that must be devoted to providing feedback on student work. To address this
problem, the present study aimed to develop an automated essay scoring
environment. A secondary aim was to test whether automatically assigned scores
correlated with the scores given by a human rater. A quantitative research
design employing a machine learning approach was adopted to meet these aims.
The data set consisted of 160 scored literary analysis essays written in an
English Literature course, each analyzing a theme in a given literary work. The
LightSide software was used to train the automated scoring model. First,
textual features were extracted and filtered. Then, the Logistic Regression,
SMO, SVO, Logistic Tree, and Naïve Bayes text classification algorithms were
tested with 10-fold cross-validation to identify the most accurate model. To
determine whether the computer-assigned scores correlated with those of the
human rater, Spearman's rank-order correlation coefficient was calculated. The
results showed that none of the algorithms scored the essays in the data set
with sufficient accuracy, and the computer-assigned scores did not correlate
significantly with the human rater's scores. The findings imply that a data set
collected in an authentic classroom environment is too small for classification
algorithms to support automated essay scoring in classroom assessment.
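For readers who want to reproduce the general workflow, the sketch below is a minimal illustration under stated assumptions: it uses scikit-learn and SciPy as stand-ins for LightSide (a GUI-based tool whose exact feature extraction and filtering settings are not reproduced here), and the file name, column names, feature-selection cutoff, and classifier settings are hypothetical. It compares several of the classifiers named above under 10-fold cross-validation and correlates the out-of-fold predictions with the human rater's scores using Spearman's rho.

```python
# Minimal sketch of the described pipeline, using scikit-learn/SciPy as
# stand-ins for LightSide. File name, column names, and the k=500
# feature-selection cutoff are hypothetical.
import pandas as pd
from scipy.stats import spearmanr
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Load the 160 scored essays (hypothetical CSV with "text" and "score" columns).
data = pd.read_csv("essays.csv")
texts, scores = data["text"], data["score"]

# A subset of the classifiers compared in the study.
models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SMO (linear SVM)": SVC(kernel="linear"),
    "Naive Bayes": MultinomialNB(),
}

for name, clf in models.items():
    pipe = Pipeline([
        # Extract unigram features, then filter to the most informative ones.
        ("features", CountVectorizer(ngram_range=(1, 1))),
        ("filter", SelectKBest(chi2, k=500)),
        ("clf", clf),
    ])
    # 10-fold cross-validated accuracy, as in the study.
    acc = cross_val_score(pipe, texts, scores, cv=10).mean()
    # Out-of-fold predictions, correlated with the human rater's scores.
    preds = cross_val_predict(pipe, texts, scores, cv=10)
    rho, p = spearmanr(scores, preds)
    print(f"{name}: accuracy={acc:.2f}, Spearman rho={rho:.2f} (p={p:.3f})")
```

Note that with only 160 essays, each fold trains on roughly 144 examples, which is consistent with the study's conclusion that authentic classroom data sets of this size are too small for reliable text classification.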
Keywords: Automated essay scoring, Literary analysis essay, Classification algorithms, Machine learning
Primary Language | English
---|---
Section | Articles
Publication Date | 16 October 2018
Published Issue | Year 2018