Abstract
A substantial number of randomized trials of educational interventions conducted over the past two decades have produced null results, yielding either no impact or an unreliable estimate of impact on student achievement or other outcomes of interest. The time and money invested in implementing such trials warrant more useful information than simply "this didn't work." In this article, we propose a framework for defining and interpreting null results, and then propose a method for systematically examining a set of potential reasons for a study's findings. The article builds on prior work on the topic and synthesizes it into a common framework designed to help the field improve both the design and interpretation of randomized trials.
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 580-589 |
| Number of pages | 10 |
| Journal | Educational Researcher |
| Volume | 48 |
| Issue number | 9 |
| DOIs | |
| State | Published - Dec 1 2019 |
Keywords
- evaluation
- experimental design
- experimental research
- planning
- program evaluation
- research utilization
ASJC Scopus subject areas
- Education