Abstract Despite the increased use of automated writing evaluation (AWE) systems and similar programs for assessment purposes in second language (L2) writing classrooms, research on student engagement with automated feedback is scarce. This naturalistic case study explored two ESL college students’ engagement with automated written corrective feedback (AWCF) provided by Grammarly when revising a final draft. Following previous research, student engagement was operationalized along three interconnected dimensions: behavioral, cognitive, and affective. Behavioral engagement was explored through the analysis of QuickTime-based screencasts of students’ Grammarly usage. Cognitive and affective engagement were measured through the analysis of students’ comments during stimulated recall of the aforementioned screencasts and a semi-structured interview. Findings suggest that the students engaged with AWCF at different levels. One showed greater cognitive engagement through his questioning of AWCF; however, he did little to verify the accuracy of the feedback, which resulted in moderate changes to his draft. The other’s overreliance on AWCF indicated more limited cognitive engagement, leading to blind acceptance of the feedback. Nevertheless, this also resulted in moderate changes to her draft.