Why We (Usually) Don't Have to Worry About Multiple Comparisons

Andrew Gelman, Jennifer Hill, Masanao Yajima

Research output: Contribution to journal › Article › peer-review


Applied researchers often find themselves making statistical inferences in settings that would seem to require multiple comparisons adjustments. We challenge the Type I error paradigm that underlies these corrections. Moreover, we posit that the problem of multiple comparisons can disappear entirely when viewed from a hierarchical Bayesian perspective. We propose building multilevel models in the settings where multiple comparisons arise. Multilevel models perform partial pooling (shifting estimates toward each other), whereas classical procedures typically keep the centers of intervals stationary, adjusting for multiple comparisons by making the intervals wider (or, equivalently, adjusting the p values corresponding to intervals of fixed width). Thus, multilevel models address the multiple comparisons problem and also yield more efficient estimates, especially in settings with low group-level variation, which is where multiple comparisons are a particular concern.
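The contrast the abstract draws, classical corrections widen intervals around fixed centers while multilevel models shift the centers toward each other, can be illustrated with a minimal empirical-Bayes sketch. This is not code from the paper: it assumes a normal-normal hierarchical model with a known sampling standard deviation, and all numbers (group count `J`, between-group sd `tau`, sampling sd `sigma`) are invented for the simulation.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(42)

# Hypothetical simulation (not from the paper): J group effects with
# small between-group sd tau, each estimated with known se sigma.
J, tau, sigma = 8, 0.5, 1.0
theta = [random.gauss(0.0, tau) for _ in range(J)]   # true group effects
y = [random.gauss(t, sigma) for t in theta]          # raw (unpooled) estimates

alpha = 0.05
z = NormalDist().inv_cdf(1 - alpha / 2)              # standard 95% z
z_bonf = NormalDist().inv_cdf(1 - alpha / (2 * J))   # Bonferroni-adjusted z

# Classical fix: centers stay at y, intervals widen by z_bonf / z.
print(f"interval half-width: {z * sigma:.2f} -> {z_bonf * sigma:.2f}")

# Multilevel fix (empirical-Bayes sketch): shrink each estimate toward
# the grand mean by tau^2 / (tau^2 + sigma^2), estimated from the data.
mu_hat = mean(y)
tau2_hat = max(stdev(y) ** 2 - sigma**2, 0.0)        # method-of-moments estimate
shrink = tau2_hat / (tau2_hat + sigma**2)            # 0 = complete pooling
theta_partial = [mu_hat + shrink * (yi - mu_hat) for yi in y]

# Centers move toward each other instead of intervals getting wider;
# when group-level variation is low, shrinkage is strong.
print(f"sd of raw estimates:    {stdev(y):.2f}")
print(f"sd of pooled estimates: {stdev(theta_partial):.2f}")
```

Note how the shrinkage factor depends on the estimated group-level variance: when that variance is small, exactly the regime the abstract flags as most worrying for multiple comparisons, the estimates are pulled strongly together, which is what protects against spurious large differences.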

Original language: English (US)
Pages (from-to): 189-211
Number of pages: 23
Journal: Journal of Research on Educational Effectiveness
Issue number: 2
State: Published - Apr 2012


Keywords

  • Bayesian inference
  • Type S error
  • hierarchical modeling
  • multiple comparisons
  • statistical significance

ASJC Scopus subject areas

  • Education

