Abstract
We explore the conditions under which short, comparative interrupted time-series (CITS) designs represent valid alternatives to randomized experiments in educational evaluations. To do so, we conduct three within-study comparisons, each of which uses a unique data set to test the validity of the CITS design by comparing its causal estimates to those from a randomized controlled trial (RCT) that shares the same treatment group. The degree of correspondence between RCT and CITS estimates depends on the observed pretest time trend differences and how they are modeled. Where the trend differences are clear and can be easily modeled, no bias results; where the trend differences are more volatile and cannot be easily modeled, the degree of correspondence is more mixed, and the best results come from matching comparison units on both pretest and demographic covariates.
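A CITS estimate of this kind is commonly obtained from a segmented regression of the outcome on time, a treatment-group indicator, a post-intervention indicator, and their interactions, so that the effect is the treated group's post-intervention deviation from its projected pretest trend relative to the comparison group's deviation. The sketch below is a minimal illustration of such a specification on simulated data; the variable names (`score`, `year`, `treat`, `post`) and the data-generating process are assumptions for illustration, not the authors' exact models or data.

```python
# Illustrative CITS (comparative interrupted time-series) specification.
# This is a generic sketch, not the specification used in the article.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical school-by-year panel: 6 pre-intervention years, 2 post years.
years = np.arange(-6, 2)                # year 0 = first post-intervention year
rows = []
for school in range(200):
    treat = int(school < 100)           # half the schools are "treated"
    for year in years:
        post = int(year >= 0)
        score = (
            0.3 * year                  # shared secular trend
            + 0.1 * treat * year        # differential pretest trend
            + 0.5 * treat * post        # true treatment effect
            + rng.normal(scale=1.0)
        )
        rows.append({"school": school, "year": year, "treat": treat,
                     "post": post, "score": score})
df = pd.DataFrame(rows)

# Segmented regression: baseline trends may differ by group (year:treat),
# and the treated group's post-intervention shift (treat:post) is the
# CITS estimate of the average treatment effect.
model = smf.ols("score ~ year * treat * post", data=df).fit()
print(model.params[["treat:post", "year:treat:post"]])
```

In the abstract's best-performing variant, comparison units would be matched to treated units on pretest outcomes and demographic covariates before fitting a model like this; that matching step sits upstream of the regression and is not shown here.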
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 269-299 |
| Number of pages | 31 |
| Journal | Journal of Educational and Behavioral Statistics |
| Volume | 41 |
| Issue number | 3 |
| DOIs | |
| State | Published - 2016 |
Keywords
- causal inference
- comparative interrupted time series
- interrupted time series
- randomized experiment
- within-study comparison
ASJC Scopus subject areas
- Education
- Social Sciences (miscellaneous)