TY - JOUR
T1 - Inference for generalized linear models via alternating directions and Bethe free energy minimization
AU - Rangan, Sundeep
AU - Fletcher, Alyson K.
AU - Schniter, Philip
AU - Kamilov, Ulugbek S.
N1 - Funding Information:
S. Rangan was supported by the National Science Foundation under Grant 1116589, Grant 1302336, Grant 1564142, and Grant 1547332. A. K. Fletcher was supported in part by the National Science Foundation under Grant 1254204 and in part by the Office of Naval Research under Grant N00014-15-1-2677. P. Schniter was supported by the National Science Foundation under Grant CCF-1218754, Grant CCF-1018368, and Grant CCF-1527162. U. S. Kamilov was supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC under Grant 267439.
Publisher Copyright:
© 1963-2012 IEEE.
PY - 2017/1
Y1 - 2017/1
N2 - Generalized linear models, where a random vector x is observed through a noisy, possibly nonlinear, function of a linear transform z = Ax, arise in a range of applications in nonlinear filtering and regression. Approximate message passing (AMP) methods, based on loopy belief propagation, are a promising class of approaches for approximate inference in these models. AMP methods are computationally simple, general, and admit precise analyses with testable conditions for optimality for large i.i.d. transforms A. However, the algorithms can diverge for general A. This paper presents a convergent approach to the generalized AMP (GAMP) algorithm based on direct minimization of a large-system limit approximation of the Bethe free energy (LSL-BFE). The proposed method uses a double-loop procedure, where the outer loop successively linearizes the LSL-BFE and the inner loop minimizes the linearized LSL-BFE using the alternating direction method of multipliers (ADMM). The proposed method, called ADMM-GAMP, is similar in structure to the original GAMP method, but with an additional least-squares minimization. It is shown that for strictly convex, smooth penalties, ADMM-GAMP is guaranteed to converge to a local minimum of the LSL-BFE, thus providing a convergent alternative to GAMP that is stable under arbitrary transforms. Simulations are presented that demonstrate the robustness of the method for non-convex penalties as well.
AB - Generalized linear models, where a random vector x is observed through a noisy, possibly nonlinear, function of a linear transform z = Ax, arise in a range of applications in nonlinear filtering and regression. Approximate message passing (AMP) methods, based on loopy belief propagation, are a promising class of approaches for approximate inference in these models. AMP methods are computationally simple, general, and admit precise analyses with testable conditions for optimality for large i.i.d. transforms A. However, the algorithms can diverge for general A. This paper presents a convergent approach to the generalized AMP (GAMP) algorithm based on direct minimization of a large-system limit approximation of the Bethe free energy (LSL-BFE). The proposed method uses a double-loop procedure, where the outer loop successively linearizes the LSL-BFE and the inner loop minimizes the linearized LSL-BFE using the alternating direction method of multipliers (ADMM). The proposed method, called ADMM-GAMP, is similar in structure to the original GAMP method, but with an additional least-squares minimization. It is shown that for strictly convex, smooth penalties, ADMM-GAMP is guaranteed to converge to a local minimum of the LSL-BFE, thus providing a convergent alternative to GAMP that is stable under arbitrary transforms. Simulations are presented that demonstrate the robustness of the method for non-convex penalties as well.
KW - ADMM
KW - Belief propagation
KW - generalized linear models
KW - message passing
KW - variational optimization
UR - http://www.scopus.com/inward/record.url?scp=85008450538&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85008450538&partnerID=8YFLogxK
U2 - 10.1109/TIT.2016.2619373
DO - 10.1109/TIT.2016.2619373
M3 - Article
AN - SCOPUS:85008450538
VL - 63
SP - 676
EP - 697
JO - IEEE Transactions on Information Theory
JF - IEEE Transactions on Information Theory
SN - 0018-9448
IS - 1
M1 - 7600459
ER -