Understanding the Bethe approximation: When and how can it go wrong?

Adrian Weller, Kui Tang, David Sontag, Tony Jebara

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Belief propagation is a remarkably effective tool for inference, even when applied to networks with cycles. It may be viewed as a way to seek the minimum of the Bethe free energy, though with no convergence guarantee in general. A variational perspective shows that, compared to exact inference, this minimization employs two forms of approximation: (i) the true entropy is approximated by the Bethe entropy, and (ii) the minimization is performed over a relaxation of the marginal polytope termed the local polytope. Here we explore when and how the Bethe approximation can fail for binary pairwise models by examining each aspect of the approximation, deriving results both analytically and with new experimental methods.
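For readers wanting the concrete objects behind the two approximations named in the abstract, here is a minimal sketch of the standard variational formulation (following Wainwright and Jordan; the symbols θ, μ, τ, M, L below are conventional notation and are not taken from the paper itself):

\[
\log Z \;=\; \max_{\mu \in \mathbb{M}} \;\langle \theta, \mu \rangle + H(\mu),
\]
\[
\log Z_B \;=\; \max_{\tau \in \mathbb{L}} \;\langle \theta, \tau \rangle + H_B(\tau),
\qquad
H_B(\tau) \;=\; \sum_{i \in V} H(\tau_i) \;-\; \sum_{(i,j) \in E} I(\tau_{ij}),
\]

where \(\mathbb{M}\) is the marginal polytope, \(\mathbb{L} \supseteq \mathbb{M}\) is the local polytope enforcing only pairwise consistency, \(H\) is the true entropy, and \(H_B\) is the Bethe entropy built from singleton entropies and pairwise mutual informations. Fixed points of loopy belief propagation correspond to stationary points of the Bethe free energy \(F_B(\tau) = -\langle \theta, \tau \rangle - H_B(\tau)\) over \(\mathbb{L}\) (Yedidia et al.), which is the sense in which BP "seeks the minimum of the Bethe free energy" above.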

Original language: English (US)
Title of host publication: Uncertainty in Artificial Intelligence - Proceedings of the 30th Conference, UAI 2014
Editors: Nevin L. Zhang, Jin Tian
Publisher: AUAI Press
Pages: 868-877
Number of pages: 10
ISBN (Electronic): 9780974903910
State: Published - 2014
Event: 30th Conference on Uncertainty in Artificial Intelligence, UAI 2014 - Quebec City, Canada
Duration: Jul 23 2014 - Jul 27 2014

Publication series

Name: Uncertainty in Artificial Intelligence - Proceedings of the 30th Conference, UAI 2014

Other

Other: 30th Conference on Uncertainty in Artificial Intelligence, UAI 2014
Country/Territory: Canada
City: Quebec City
Period: 7/23/14 - 7/27/14

ASJC Scopus subject areas

  • Artificial Intelligence
