Abstract
This study explores the extent to which differences in research design explain variation in Head Start program impacts. We employ meta-analytic techniques to predict effect sizes for cognitive and achievement outcomes as a function of the type and rigor of the research design, the quality and type of outcome measures, the activity level of the control group, and attrition. Across program evaluations, the average program-level effect size was 0.27 standard deviations. About 41% of the variation in estimates across evaluations can be explained by research design features, including the extent to which the control group experienced other forms of early care or education, and 11% of the variation within programs can be explained by the quality and type of the outcome measures.
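The paper's estimation code is not reproduced here. As a minimal sketch of the kind of moderator meta-regression the abstract describes (effect sizes predicted from design features, weighted by precision), the Python snippet below fits an inverse-variance-weighted regression. All data values and variable names (`effect_size`, `variance`, `control_group_in_care`, `randomized`) are hypothetical placeholders, not the study's data or model.

```python
import numpy as np
import statsmodels.api as sm

# Per-evaluation effect sizes (in standard deviation units) and their
# sampling variances -- fabricated values for illustration only.
effect_size = np.array([0.35, 0.12, 0.41, 0.22, 0.30, 0.18])
variance = np.array([0.010, 0.020, 0.015, 0.012, 0.018, 0.025])

# Hypothetical design moderators: did the control group receive other
# early care or education, and was assignment randomized?
control_group_in_care = np.array([0, 1, 0, 1, 0, 1])
randomized = np.array([1, 0, 1, 1, 0, 0])

# Inverse-variance-weighted regression of effect sizes on design features
# (a fixed-effects-style meta-regression).
X = sm.add_constant(np.column_stack([control_group_in_care, randomized]))
fit = sm.WLS(effect_size, X, weights=1.0 / variance).fit()
print(fit.params)    # moderator coefficients
print(fit.rsquared)  # share of between-evaluation variation explained
```

A full analysis of the sort reported above would typically use a random-effects or mixed-effects meta-regression (for example, the metafor package in R) to separate between-program from within-program variance; the WLS sketch only illustrates the moderator-regression idea.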
Original language | English (US)
---|---
Pages (from-to) | 76-95
Number of pages | 20
Journal | Educational Evaluation and Policy Analysis
Volume | 35
Issue number | 1
DOIs |
State | Published - Mar 2013
Keywords
- Head Start
- meta-analysis
- program evaluation
ASJC Scopus subject areas
- Education