TY - JOUR

T1 - Vector Approximate Message Passing

AU - Rangan, Sundeep

AU - Schniter, Philip

AU - Fletcher, Alyson K.

N1 - Funding Information:
Manuscript received October 11, 2016; revised June 10, 2018; accepted May 1, 2019. Date of publication May 13, 2019; date of current version September 13, 2019. This work was supported in part by the National Science Foundation under Grant 1302336, Grant 1564142, and Grant 1547332, in part by the National Science Foundation under Grant 1527162 and Grant 1716388, in part by the National Science Foundation under Grant 1254204 and Grant 1564278, and in part by the Office of Naval Research under Grant N00014-15-1-2677. This work was presented in part at the 2017 IEEE International Symposium on Information Theory.
Publisher Copyright:
© 1963-2012 IEEE.

PY - 2019/10

Y1 - 2019/10

N2 - The standard linear regression (SLR) problem is to recover a vector x^0 from noisy linear observations y = Ax^0 + w. The approximate message passing (AMP) algorithm proposed by Donoho, Maleki, and Montanari is a computationally efficient iterative approach to SLR that has a remarkable property: for large i.i.d. sub-Gaussian matrices A, its per-iteration behavior is rigorously characterized by a scalar state evolution whose fixed points, when unique, are Bayes optimal. The AMP algorithm, however, is fragile in that even small deviations from the i.i.d. sub-Gaussian model can cause the algorithm to diverge. This paper considers a 'vector AMP' (VAMP) algorithm and shows that VAMP has a rigorous scalar state evolution that holds under a much broader class of large random matrices A: those that are right-orthogonally invariant. After performing an initial singular value decomposition (SVD) of A, the per-iteration complexity of VAMP is similar to that of AMP. In addition, the fixed points of VAMP's state evolution are consistent with the replica prediction of the minimum mean-squared error derived by Tulino, Caire, Verdú, and Shamai. Numerical experiments are used to confirm the effectiveness of VAMP and its consistency with state-evolution predictions.

AB - The standard linear regression (SLR) problem is to recover a vector x^0 from noisy linear observations y = Ax^0 + w. The approximate message passing (AMP) algorithm proposed by Donoho, Maleki, and Montanari is a computationally efficient iterative approach to SLR that has a remarkable property: for large i.i.d. sub-Gaussian matrices A, its per-iteration behavior is rigorously characterized by a scalar state evolution whose fixed points, when unique, are Bayes optimal. The AMP algorithm, however, is fragile in that even small deviations from the i.i.d. sub-Gaussian model can cause the algorithm to diverge. This paper considers a 'vector AMP' (VAMP) algorithm and shows that VAMP has a rigorous scalar state evolution that holds under a much broader class of large random matrices A: those that are right-orthogonally invariant. After performing an initial singular value decomposition (SVD) of A, the per-iteration complexity of VAMP is similar to that of AMP. In addition, the fixed points of VAMP's state evolution are consistent with the replica prediction of the minimum mean-squared error derived by Tulino, Caire, Verdú, and Shamai. Numerical experiments are used to confirm the effectiveness of VAMP and its consistency with state-evolution predictions.

KW - Belief propagation

KW - compressive sensing

KW - inference algorithms

KW - message passing

KW - random matrices

UR - http://www.scopus.com/inward/record.url?scp=85074193680&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85074193680&partnerID=8YFLogxK

U2 - 10.1109/TIT.2019.2916359

DO - 10.1109/TIT.2019.2916359

M3 - Article

AN - SCOPUS:85074193680

VL - 65

SP - 6664

EP - 6684

JO - IEEE Transactions on Information Theory

JF - IEEE Transactions on Information Theory

SN - 0018-9448

IS - 10

M1 - 8713501

ER -