Examining the Internal Validity and Statistical Precision of the Comparative Interrupted Time Series Design by Comparison With a Randomized Experiment

Travis St.Clair, Thomas D. Cook, Kelly Hallberg

Research output: Contribution to journal › Article › peer-review

Abstract

Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effect estimates to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment but a nonequivalent comparison group assessed at six time points before treatment. We estimate program effects with and without matching of the comparison schools, and we systematically vary the number of pretest time points in the analysis. The CITS design produces impact estimates that are extremely close to the experimental benchmarks and, as implemented here, does so equally well with and without matching. Adding pretest time points is an advantage so long as the trend differences between the treatment and comparison groups are correctly modeled. Otherwise, more time points can increase bias.
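To make the design concrete, below is a minimal sketch of one common CITS specification: a regression with group-specific linear pretest trends, in which the treatment-by-post interaction estimates the program effect as the treatment group's post-interruption deviation from its projected trend, net of the comparison group's deviation. This is an illustration on simulated data, not the authors' actual model; the variable names (`school`, `treat`, `t`, `post`) and the simulated effect size are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical simulated panel (illustrative, not from the paper):
# 40 treatment and 40 comparison schools, six pretest time points
# (t = -6..-1) and two posttest points (t = 0, 1).
rng = np.random.default_rng(42)
n_schools, n_periods = 80, 8
school = np.repeat(np.arange(n_schools), n_periods)
treat = np.repeat((np.arange(n_schools) < 40).astype(int), n_periods)
t = np.tile(np.arange(-6, 2), n_schools)
post = (t >= 0).astype(int)
true_effect = 0.30
y = (
    0.5 * treat                      # baseline group difference
    + 0.10 * t + 0.03 * treat * t    # group-specific pretest trends
    + true_effect * treat * post     # program effect after the interruption
    + rng.normal(0, 0.5, size=n_schools * n_periods)
)
df = pd.DataFrame({"school": school, "treat": treat, "t": t, "post": post, "y": y})

# CITS regression: y ~ treat + t + treat:t + post + treat:post.
# The treat:post coefficient is the impact estimate; standard errors
# are clustered at the school level.
fit = smf.ols("y ~ treat * t + treat * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school"]}
)
print(f"estimated effect: {fit.params['treat:post']:.3f} "
      f"(SE {fit.bse['treat:post']:.3f}, true {true_effect})")
```

Note that if the `treat:t` term were omitted while pretest trends actually differed across groups, the unmodeled trend difference would load onto `treat:post`; this is the bias mechanism the abstract warns about when more pretest time points are added under a misspecified trend model.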

Original language: English (US)
Pages (from-to): 311-327
Number of pages: 17
Journal: American Journal of Evaluation
Volume: 35
Issue number: 3
State: Published - Sep 2014

Keywords

  • educational evaluation
  • interrupted time series
  • randomized clinical trial
  • within-study comparison

ASJC Scopus subject areas

  • Business and International Management
  • Social Psychology
  • Health(social science)
  • Education
  • Sociology and Political Science
  • Strategy and Management
