This paper investigates the effect of error annotation on post-editing effort and on the post-edited product. The study also aims to highlight the significance of quality evaluation, particularly error annotation, which, I believe, is a useful method for learning how to work with machine translation (MT). To accomplish these goals, ten translation students were divided into two groups—a control group and a treatment group—in an experimental study. The control group post-edited the machine-translated subtitles of an educational video, while the treatment group performed a quality evaluation before post-editing the same content. Temporal and technical effort data (Krings 2001) were gathered from the students to determine whether there was a significant difference between the two groups. In addition, the end products were examined to see whether quality evaluation affected the post-editing decisions of the treatment group differently from those of the control group. The results show a significant difference in temporal effort between the two groups, with the treatment group completing the post-editing task faster; the control group also expended more technical effort than the treatment group, though this difference was not significant. The treatment group likewise displayed a tendency to use MT and to edit more efficiently than the control group.
| | |
|---|---|
| Primary Language | English |
| Subjects | Translation and Interpretation Studies |
| Journal Section | Research Articles |
| Authors | |
| Publication Date | June 30, 2023 |
| Published in Issue | Year 2023, Volume: 6, Issue: 1 |