TY - GEN
T1 - Vector approximate message passing
AU - Rangan, Sundeep
AU - Schniter, Philip
AU - Fletcher, Alyson K.
N1 - Funding Information:
S. Rangan is supported in part by the National Science Foundation under Grants 1302336, 1564142, and 1547332. P. Schniter is supported in part by the National Science Foundation grant CCF-1527162. A. K. Fletcher is supported in part by National Science Foundation grants 1254204 and 1564278 as well as the Office of Naval Research grant N00014-15-1-2677.
Publisher Copyright:
© 2017 IEEE.
PY - 2017/8/9
Y1 - 2017/8/9
AB - The standard linear regression (SLR) problem is to recover a vector x0 from noisy linear observations y = Ax0 + w. The approximate message passing (AMP) algorithm recently proposed by Donoho, Maleki, and Montanari is a computationally efficient iterative approach to SLR that has a remarkable property: for large i.i.d. sub-Gaussian matrices A, its per-iteration behavior is rigorously characterized by a scalar state evolution whose fixed points, when unique, are Bayes optimal. AMP, however, is fragile in that even small deviations from the i.i.d. sub-Gaussian model can cause the algorithm to diverge. This paper considers a 'vector AMP' (VAMP) algorithm and shows that VAMP has a rigorous scalar state evolution that holds under a much broader class of large random matrices A: those that are right-rotationally invariant. After performing an initial singular value decomposition (SVD) of A, the per-iteration complexity of VAMP is similar to that of AMP. In addition, the fixed points of VAMP's state evolution are consistent with the replica prediction of the minimum mean-squared error recently derived by Tulino, Caire, Verdú, and Shamai.
KW - Belief propagation
KW - Compressive sensing
KW - Inference algorithms
KW - Message passing
KW - Random matrices
UR - http://www.scopus.com/inward/record.url?scp=85034036614&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85034036614&partnerID=8YFLogxK
U2 - 10.1109/ISIT.2017.8006797
DO - 10.1109/ISIT.2017.8006797
M3 - Conference contribution
AN - SCOPUS:85034036614
T3 - IEEE International Symposium on Information Theory - Proceedings
SP - 1588
EP - 1592
BT - 2017 IEEE International Symposium on Information Theory, ISIT 2017
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2017 IEEE International Symposium on Information Theory, ISIT 2017
Y2 - 25 June 2017 through 30 June 2017
ER -