Abstract Automated Writing Evaluation (AWE) systems are built by extracting features from a 30-minute essay and using a statistical model that weights those features to optimally predict human scores on those essays. But the goal of AWE should be to predict performance on real-world, naturalistic tasks, not merely to predict human scores on 30-minute essays. A more meaningful way of creating the feature weights in an AWE model, therefore, is to select weights optimized to predict the real-world criterion. This new approach was applied in a sample of 194 graduate students who each supplied two examples of their writing from required graduate school coursework. Contrary to results from a prior study predicting portfolio scores, the experimental model was no more effective than the traditional model at predicting scores on actual writing done in graduate school. Importantly, when the new weights were evaluated in large samples of international students, the population subgroups advantaged or disadvantaged by the new weights differed from those advantaged or disadvantaged by the traditional weights. It is critically important for any developer of AWE models to recognize that models that are equally effective at predicting an external criterion may advantage or disadvantage different groups.
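The core contrast described above, weights fit to predict human essay scores versus weights fit to predict a real-world criterion, can be sketched with ordinary least squares. The sketch below is illustrative only: the feature matrix, synthetic targets, and OLS fit are assumptions for demonstration, not the study's actual features or model.

```python
import numpy as np

# Hypothetical illustration: X holds extracted essay features
# (n essays x k features). Traditional AWE fits weights to predict
# human scores on the timed essays; the experimental approach instead
# fits weights to predict an external, real-world criterion (e.g.,
# scores on actual graduate coursework writing). All data are synthetic.
rng = np.random.default_rng(0)
n, k = 194, 5                      # n mirrors the study's sample of 194
X = rng.normal(size=(n, k))        # extracted essay features (synthetic)
human_scores = X @ rng.normal(size=k) + rng.normal(scale=0.5, size=n)
criterion = X @ rng.normal(size=k) + rng.normal(scale=0.5, size=n)

def fit_weights(features, target):
    """Ordinary least-squares weights mapping features to a target."""
    w, *_ = np.linalg.lstsq(features, target, rcond=None)
    return w

traditional_w = fit_weights(X, human_scores)   # optimized for essay scores
experimental_w = fit_weights(X, criterion)     # optimized for the criterion

# Either weight vector scores a new essay the same way: score = features @ w.
# The two models can be equally accurate overall yet weight features
# differently, which is how they can advantage different subgroups.
```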