Item Response Models for Multiple Attempts With Incomplete Data

Yoav Bergner, Ikkyu Choi, Katherine E. Castellano

Research output: Contribution to journal › Article › peer-review


Allowance for multiple chances to answer constructed response questions is a prevalent feature in computer-based homework and exams. We consider the use of item response theory to estimate item characteristics and student ability when multiple attempts are allowed but no explicit penalty is deducted for extra tries. This is common practice in online formative assessments, where the number of attempts is often unlimited. In these environments, some students may not always answer until correct, but may instead terminate the response process after one or more incorrect tries. We contrast the cases of graded and sequential item response models, both unidimensional models that do not explicitly account for factors other than ability. These approaches differ not only in their log-odds assumptions but, importantly, in how they handle incomplete data. We explore the consequences of model misspecification through a simulation study and with four online homework data sets. Our results suggest that model choice matters little for complete data but is quite sensitive to whether missing responses are regarded as informative (of inability) or not (e.g., missing at random). Under realistic conditions, a sequential model with parametric degrees of freedom similar to a graded model can account for more response patterns and outperforms the latter in terms of model fit.
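To make the sequential (continuation-ratio) formulation concrete, the sketch below computes the likelihood of an observed attempt pattern under a simple logistic sequential model. This is an illustrative sketch only, not the authors' implementation: the discrimination `a` and the per-attempt step difficulties `bs` are hypothetical parameters, and each attempt is modeled as a conditional Bernoulli trial given that the student reached it.

```python
import math

def sigmoid(x):
    # Standard logistic function.
    return 1.0 / (1.0 + math.exp(-x))

def sequential_pattern_prob(theta, a, bs, n_failures, solved):
    """Likelihood of a multiple-attempt response pattern under a
    sequential (continuation-ratio) model (illustrative sketch).

    theta      : student ability
    a          : item discrimination (hypothetical parameter)
    bs         : step difficulties, one per attempt (hypothetical)
    n_failures : number of observed incorrect attempts
    solved     : True if the next attempt was correct; False if the
                 student stopped, in which case only the observed
                 failures enter the likelihood (a missing-at-random
                 style treatment of the unattempted tries)
    """
    p = 1.0
    # Each failed attempt contributes 1 - P(success at that step).
    for j in range(n_failures):
        p *= 1.0 - sigmoid(a * (theta - bs[j]))
    # A final success contributes P(success at the next step).
    if solved:
        p *= sigmoid(a * (theta - bs[n_failures]))
    return p
```

For example, with `theta = 0`, `a = 1`, and two equal step difficulties `bs = [0, 0]`, the pattern probabilities are 0.5 for solving on the first try, 0.25 for failing once and then solving, and 0.25 for failing twice; they sum to one. Treating an abandoned item as informative of inability would instead multiply in additional failure terms for the unattempted steps, which is the contrast in missing-data handling the abstract describes.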

Original language: English (US)
Pages (from-to): 415-436
Number of pages: 22
Journal: Journal of Educational Measurement
Issue number: 2
State: Published - Jun 1 2019

ASJC Scopus subject areas

  • Education
  • Developmental and Educational Psychology
  • Applied Psychology
  • Psychology (miscellaneous)


