ABSTRACT Many issues are known to plague the validity of risk assessment, but empirical scholarship in criminal justice has not yet directly investigated how they influence prediction accuracy. This study examines how frequently risk assessment staff classify information as insufficient and the extent to which systematic error undermines the validity of risk prediction. Using Oregon’s Juvenile Crime Prevention Risk Assessment Instrument, we identified risk items with critical missing information and infused the dataset with varying levels of error. Classification outcomes and overall prediction accuracy both deteriorated, suggesting that the common practice of knowingly or unknowingly underestimating risk may be highly pernicious.
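The abstract describes infusing a risk-assessment dataset with varying levels of error and observing the effect on prediction accuracy. The sketch below is only an illustration of that general idea, not the study's actual instrument, data, or analysis: it builds a synthetic set of binary risk items, randomly downgrades endorsed items to mimic staff underestimating risk when information is insufficient, and reports how discrimination (AUC) changes as the error rate grows. All variable names, the logistic model, and the error rates are assumptions for demonstration only.

```python
# Hypothetical sketch: systematic underestimation of risk items and its effect
# on prediction accuracy. Synthetic data only; not the study's dataset or model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_youth, n_items = 5000, 10

# Synthetic binary risk items (1 = risk present) and a recidivism outcome
# generated from the total risk score.
X = rng.binomial(1, 0.3, size=(n_youth, n_items))
true_score = X.sum(axis=1)
y = rng.binomial(1, 1 / (1 + np.exp(-(true_score - 3))))

def auc_with_error(error_rate: float) -> float:
    """Flip each endorsed risk item to 'absent' with probability `error_rate`,
    mimicking risk scored as low when information is insufficient, then refit
    and score a simple classifier on the degraded data."""
    mask = rng.binomial(1, error_rate, size=X.shape).astype(bool)
    X_err = X.copy()
    X_err[mask & (X == 1)] = 0  # systematic underestimation of risk
    model = LogisticRegression().fit(X_err, y)
    return roc_auc_score(y, model.predict_proba(X_err)[:, 1])

for rate in (0.0, 0.1, 0.25, 0.5):
    print(f"error rate {rate:.2f}: AUC = {auc_with_error(rate):.3f}")
```

Under these assumptions, accuracy degrades as the injected error rate rises, which is the pattern the study reports for its real instrument and data.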