Abstract
Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment but a nonequivalent comparison group that is assessed at six time points before treatment. We estimate program effects with and without matching of the comparison schools, and we also systematically vary the number of pretest time points in the analysis. CITS designs produce impact estimates that are extremely close to the experimental benchmarks and, as implemented here, do so equally well with and without matching. Adding time points provides an advantage so long as the pretest trend differences in the treatment and comparison groups are correctly modeled. Otherwise, more time points can increase bias.
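A generic CITS impact model helps clarify the abstract's point about modeling pretest trends. The specification below is a common textbook form with hypothetical notation, not necessarily the exact model the authors estimated:

$$
Y_{st} = \beta_0 + \beta_1\,\text{Treat}_s + \beta_2\, t + \beta_3\,(\text{Treat}_s \times t) + \beta_4\,\text{Post}_t + \beta_5\,(\text{Treat}_s \times \text{Post}_t) + \varepsilon_{st}
$$

Here $Y_{st}$ is the outcome for school $s$ at time $t$, $\text{Post}_t$ indicates post-treatment periods, and $\beta_5$ captures the program effect as the treatment group's deviation from its projected baseline trend relative to the comparison group. The abstract's caution about adding pretest time points corresponds to the $\beta_3$ trend-interaction term: if treatment and comparison pretest trends differ and that term is misspecified (e.g., forced linear when the true trends are not), a longer pretest series can amplify rather than reduce bias.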
| Field | Value |
| --- | --- |
| Original language | English (US) |
| Pages (from-to) | 311-327 |
| Number of pages | 17 |
| Journal | American Journal of Evaluation |
| Volume | 35 |
| Issue number | 3 |
| DOIs | |
| State | Published - Sep 2014 |
Keywords
- educational evaluation
- interrupted time series
- randomized clinical trial
- within-study comparison
ASJC Scopus subject areas
- Business and International Management
- Social Psychology
- Health (social science)
- Education
- Sociology and Political Science
- Strategy and Management