ABSTRACT It is common practice for K-12 schools to assess multilingual students' language proficiency to determine placement in language support programs. Because such programs can provide essential scaffolding, the policies guiding these assessments merit careful consideration. It is well accepted that quality assessments must be valid (representative of the constructs of interest) and reliable (consistent and free of error). However, a tension exists between validity and reliability, known as the attenuation paradox. Validity is strengthened when the range and depth of the assessed construct align with the target domain. Yet increased domain coverage can introduce construct-irrelevant variance and greater potential for error, negatively impacting reliability. Conversely, narrowing the assessed construct, which tends to increase reliability, weakens validity through construct underrepresentation. In this paper, we revisit the validity–reliability paradox by examining initial assessment policies for K-12 English language support programs in six nations. We report on each nation's policies for language placement assessment and the associated language support programs and funding mechanisms. We compare the assessment policies along the validity–reliability spectrum, framed by Bachman's assessment use argument heuristic. We conclude with a discussion of implications related to educational equity.