Scoring of student item response data from online courses, and especially massive open online courses (MOOCs), is complicated by two challenges: potentially large amounts of missing data and allowances for multiple attempts to answer. Approaches to ability estimation that address both of these issues are considered using data from a large-enrollment electrical engineering MOOC. The allowance of unlimited attempts opens up a range of observed-score and latent-variable approaches to scoring the constructed-response homework. With respect to missing data, two classical approaches are discussed: treating omitted items as incorrect or as missing at random (MAR). These treatments turn out to have slightly different interpretations depending on the scoring model. In all, twelve homework scores are proposed, based on combinations of scoring model and missing-data treatment. The scores are computed, and the correlations between each score and the final exam score are compared, with attention to different populations of course participants.