Abstract
Standard item response theory (IRT) models are ill-equipped for settings in which the probability of a correct response depends on the location in the test where an item is encountered—a phenomenon we refer to as position effects. Unmodeled position effects complicate comparisons among respondents taking the same test. We propose a position-sensitive IRT model that is a mixture of two item response functions, capturing the difference in response probability when the item is encountered early versus late in the test. The mixing proportion depends on item location and latent person-level characteristics, separating person and item contributions to position effects. We present simulation studies outlining various features of model performance and conclude with an application to a large-scale admissions test with observed position effects.
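The model described above mixes two item response functions with a weight that depends on item position and a latent person trait. As an illustration only, the sketch below uses standard 2PL item response functions and a hypothetical logistic parameterization of the mixing weight (the names `gamma`, `delta`, and the exact functional form are assumptions, not the paper's specification):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def irf_2pl(theta, a, b):
    # Standard two-parameter logistic item response function:
    # P(correct | ability theta, discrimination a, difficulty b).
    return sigmoid(a * (theta - b))

def position_sensitive_prob(theta, gamma, position, n_items,
                            a, b_early, b_late, delta=1.0):
    # Mixture of an "early" and a "late" item response function.
    # The mixing weight pi_early depends on the item's relative
    # position and a latent person-level trait gamma (hypothetical
    # parameterization chosen for illustration).
    rel_pos = position / n_items                   # 0 = start, 1 = end of test
    pi_early = sigmoid(delta * (gamma - rel_pos))  # placeholder mixing weight
    p_early = irf_2pl(theta, a, b_early)
    p_late = irf_2pl(theta, a, b_late)
    return pi_early * p_early + (1.0 - pi_early) * p_late
```

With a late difficulty higher than the early difficulty (`b_late > b_early`), the sketch reproduces the qualitative pattern in the abstract: the same item becomes harder for the same person when it appears later in the test.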
| Original language | English (US) |
| --- | --- |
| Journal | Journal of Educational and Behavioral Statistics |
| State | Accepted/In press - 2024 |
Keywords
- item response theory
- machine learning
- psychometrics
ASJC Scopus subject areas
- Education
- Social Sciences (miscellaneous)