Abstract

Background: Rater variability in performance-based assessments is well documented, yet limited research has explored it in undergraduate nursing programs.

Methods: A prospective follow-up study design was used to determine the extent of assessor stringency in clinical skills assessments in an undergraduate nursing program at a large multi-campus university in Sydney, Australia. Grades for students' clinical skills assessments in three units (semesters one, three and five) were extracted from an administrative database. Results were matched to student demographic data (age, gender, language spoken at home, country of birth) and to the assessor.

Results: A total of 2339 graded clinical skills assessments of students in the undergraduate nursing program were available for analysis, representing 75% of students enrolled in the three nursing skills units. Overseas-born students had lower pass grades than Australian-born students (78% vs. 85%; χ² = 32.32, df = 2). A further chi-square test yielded χ² = 17.81, df = 2.

Conclusions: The strongest predictor of a student passing their nursing skills assessment was the leniency of the assessor. A proactive approach to detecting and correcting variability in clinical skills assessments, including a review of assessor training and support, is needed given the high-stakes nature of these assessments.