Direct Behavior Rating (DBR) has been proposed as a promising approach to assess student behavioral progress in classroom settings. The current study examines raters and occasions, as well as their interactions, as sources of error in the use of DBR–Multi-Item Scales (DBR-MIS) and DBR–Single-Item Scales (DBR-SIS) for academically engaged (AE) and disruptive behavior (DB). Furthermore, the stability of scores across two school subjects (i.e., German language, mathematics) was examined. A total of 20 students and two teachers in an inclusive elementary school classroom participated in the study. Generalizability study results suggest that little variance in DBR scores was attributable to the facets of raters or occasions, but the interactions of persons with raters and with occasions accounted for a large portion of variance. Variance attributable to these interactions was slightly higher for DBR-MIS than for DBR-SIS ratings. Decision studies revealed that dependable measurements of AE could be achieved within 1.5 weeks of daily ratings for absolute decision-making. Findings regarding DB were mixed. Differences between the two school subjects were found, indicating that DBR ratings obtained in one academic setting may not necessarily be exchangeable with ratings obtained in another. Overall, the results demonstrate the utility of DBR for behavioral progress-monitoring purposes.
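To make the generalizability (G-study) and decision (D-study) logic behind these findings concrete, the sketch below estimates variance components for a fully crossed persons × raters × occasions design via expected mean squares and then computes the dependability (Φ) coefficient used for absolute decisions. This is an illustrative reconstruction of standard G-theory formulas, not the study's actual analysis; the simulated data, sample sizes, and component labels are assumptions for the example.

```python
import numpy as np

def g_study_pxrxo(X):
    """Estimate variance components for a fully crossed
    persons x raters x occasions design (EMS method).
    X has shape (n_persons, n_raters, n_occasions)."""
    n_p, n_r, n_o = X.shape
    m = X.mean()
    mp = X.mean(axis=(1, 2))   # person means
    mr = X.mean(axis=(0, 2))   # rater means
    mo = X.mean(axis=(0, 1))   # occasion means
    mpr = X.mean(axis=2)       # person x rater cell means
    mpo = X.mean(axis=1)       # person x occasion cell means
    mro = X.mean(axis=0)       # rater x occasion cell means

    # Mean squares from the three-way ANOVA decomposition
    MS_p = n_r * n_o * np.sum((mp - m) ** 2) / (n_p - 1)
    MS_r = n_p * n_o * np.sum((mr - m) ** 2) / (n_r - 1)
    MS_o = n_p * n_r * np.sum((mo - m) ** 2) / (n_o - 1)
    MS_pr = n_o * np.sum((mpr - mp[:, None] - mr[None, :] + m) ** 2) \
        / ((n_p - 1) * (n_r - 1))
    MS_po = n_r * np.sum((mpo - mp[:, None] - mo[None, :] + m) ** 2) \
        / ((n_p - 1) * (n_o - 1))
    MS_ro = n_p * np.sum((mro - mr[:, None] - mo[None, :] + m) ** 2) \
        / ((n_r - 1) * (n_o - 1))
    resid = (X - mpr[:, :, None] - mpo[:, None, :] - mro[None, :, :]
             + mp[:, None, None] + mr[None, :, None] + mo[None, None, :] - m)
    MS_pro = np.sum(resid ** 2) / ((n_p - 1) * (n_r - 1) * (n_o - 1))

    # Solve the expected-mean-square equations (negatives set to 0)
    v = {'pro,e': MS_pro}
    v['pr'] = max((MS_pr - MS_pro) / n_o, 0.0)
    v['po'] = max((MS_po - MS_pro) / n_r, 0.0)
    v['ro'] = max((MS_ro - MS_pro) / n_p, 0.0)
    v['p'] = max((MS_p - MS_pr - MS_po + MS_pro) / (n_r * n_o), 0.0)
    v['r'] = max((MS_r - MS_pr - MS_ro + MS_pro) / (n_p * n_o), 0.0)
    v['o'] = max((MS_o - MS_po - MS_ro + MS_pro) / (n_p * n_r), 0.0)
    return v

def phi_coefficient(v, n_r, n_o):
    """D-study dependability coefficient for absolute decisions,
    for a design with n_r raters and n_o occasions."""
    abs_error = (v['r'] / n_r + v['o'] / n_o + v['pr'] / n_r
                 + v['po'] / n_o + (v['ro'] + v['pro,e']) / (n_r * n_o))
    return v['p'] / (v['p'] + abs_error)
```

A D-study then varies `n_o` (e.g., 1 to 10 daily ratings) and checks where Φ crosses a dependability threshold such as .80, which is how a "1.5 weeks of daily ratings" recommendation can be derived; the pattern reported in the abstract (large person × rater and person × occasion interactions, small rater and occasion main effects) would show up directly in the estimated components.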