
IRT Scoring and Test Blueprint Fidelity



This article examines how item response theory (IRT) scoring models reflect the intended content allocation in a set of test specifications, or test blueprint. Although either an adaptive or a linear assessment can be built to reflect a set of design specifications, the method of scoring is also a critical step. Standard IRT models employ a set of optimal scoring weights, and these weights depend on item parameters in the two-parameter logistic (2PL) and three-parameter logistic (3PL) models. The current article investigates whether the scoring models reflect an intended set of weights, defined as the proportion of items falling into each cell of the test blueprint. The 3PL model is of special interest because its optimal scoring weights also depend on ability. Thus, the concern arises that, for examinees of low ability, the intended weights are implicitly altered.
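The abstract does not give the weight formulas, but the ability-dependence it describes can be illustrated with a minimal sketch, assuming the standard 3PL response function and Birnbaum's locally optimal weights (both standard in the IRT literature, not taken from this article), with hypothetical item parameters. Under the 3PL model, P_i(theta) = c_i + (1 - c_i) / (1 + exp(-a_i (theta - b_i))), and the locally optimal weight w_i(theta) = a_i (P_i(theta) - c_i) / ((1 - c_i) P_i(theta)) tends toward a_i for high-ability examinees but toward zero as P_i approaches the guessing floor c_i:

import math

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def optimal_weight_3pl(theta, a, b, c):
    """Birnbaum's locally optimal scoring weight for a 3PL item."""
    p = p_3pl(theta, a, b, c)
    return a * (p - c) / ((1.0 - c) * p)

# Hypothetical item parameters for illustration: a=1.2, b=0.0, c=0.2.
for theta in (-3.0, -1.0, 0.0, 1.0, 3.0):
    w = optimal_weight_3pl(theta, a=1.2, b=0.0, c=0.2)
    print(f"theta={theta:+.1f}  weight={w:.3f}")

# The weight shrinks from about 1.19 at theta=+3 to about 0.14 at
# theta=-3: low-ability examinees receive near-zero effective weight
# on such items, so the blueprint-intended proportional weights are
# implicitly altered for them.

Under the 2PL model (c_i = 0), the same expression reduces to w_i = a_i for every ability level, which is why the concern raised in the abstract is specific to the 3PL model.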

Keywords: scoring test; IRT scoring; IRT; test blueprint

Journal Title: Applied Psychological Measurement
Year Published: 2018
