Assessing Sensitivity to Unmeasured Confounding Using a Simulated Potential Confounder

Nicole Bohme Carnegie, Masataka Harada, Jennifer L. Hill

Research output: Contribution to journal › Article › peer-review

Abstract

A major obstacle to developing evidence-based policy is the difficulty of implementing randomized experiments to answer all causal questions of interest. When using a nonexperimental study, it is critical to assess how much the results could be affected by unmeasured confounding. We present a set of graphical and numeric tools to explore the sensitivity of causal estimates to the presence of an unmeasured confounder. We characterize the confounder through two parameters that describe the relationships between (a) the confounder and the treatment assignment and (b) the confounder and the outcome variable. Our approach has two primary advantages over similar approaches that are currently implemented in standard software. First, it can be applied to both continuous and binary treatment variables. Second, our method for binary treatment variables allows the researcher to specify three possible estimands (average treatment effect, effect of the treatment on the treated, effect of the treatment on the controls). These options are all implemented in an R package called treatSens. We demonstrate the efficacy of the method through simulations. We illustrate its potential usefulness in practice in the context of two policy applications.

Original language: English (US)
Pages (from-to): 395-420
Number of pages: 26
Journal: Journal of Research on Educational Effectiveness
Volume: 9
Issue number: 3
DOIs: yes
State: Published - Jul 2 2016

Keywords

  • causal inference
  • hidden bias
  • omitted variable
  • sensitivity analysis
  • unmeasured confounder

ASJC Scopus subject areas

  • Education
