Abstract
The standard linear regression (SLR) problem is to recover a vector $\mathrm{x}^{0}$ from noisy linear observations $\mathrm{y}=\mathrm{A}\mathrm{x}^{0}+\mathrm{w}$. The approximate message passing (AMP) algorithm proposed by Donoho, Maleki, and Montanari is a computationally efficient iterative approach to SLR that has a remarkable property: for large i.i.d. sub-Gaussian matrices $\mathrm{A}$, its per-iteration behavior is rigorously characterized by a scalar state evolution whose fixed points, when unique, are Bayes optimal. The AMP algorithm, however, is fragile in that even small deviations from the i.i.d. sub-Gaussian model can cause the algorithm to diverge. This paper considers a 'vector AMP' (VAMP) algorithm and shows that VAMP has a rigorous scalar state evolution that holds under a much broader class of large random matrices $\mathrm{A}$: those that are right-orthogonally invariant. After performing an initial singular value decomposition (SVD) of $\mathrm{A}$, the per-iteration complexity of VAMP is similar to that of AMP. In addition, the fixed points of VAMP's state evolution are consistent with the replica prediction of the minimum mean-squared error derived by Tulino, Caire, Verdú, and Shamai. Numerical experiments are used to confirm the effectiveness of VAMP and its consistency with state-evolution predictions.
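To make the setup concrete, the sketch below illustrates the SLR model $\mathrm{y}=\mathrm{A}\mathrm{x}^{0}+\mathrm{w}$ and a minimal VAMP-style iteration that alternates a separable denoiser with an LMMSE stage, exchanging extrinsic means and precisions between the two. It is an illustrative sketch, not the paper's algorithm: the function name `vamp_sketch`, the soft-threshold denoiser, the threshold schedule, the initialization, and the direct matrix inverse (rather than the SVD-based form the paper uses for efficiency) are all assumptions made here for brevity.

```python
import numpy as np

def soft_threshold(r, lam):
    """Soft-thresholding denoiser (proximal map of the l1 penalty)."""
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def vamp_sketch(y, A, gamma_w, n_iter=20, gamma1=1.0):
    """Minimal VAMP-style iteration for y = A x0 + w, w ~ N(0, 1/gamma_w).

    Illustrative only: the denoiser, threshold schedule, and initialization
    are assumptions; the paper's algorithm uses an SVD of A so that the
    LMMSE stage costs roughly the same per iteration as AMP.
    """
    M, N = A.shape
    AtA, Aty = A.T @ A, A.T @ y
    r1 = Aty.copy()                      # heuristic matched-filter initialization
    x1 = np.zeros(N)
    for _ in range(n_iter):
        # Denoising stage: treat r1 as x0 plus noise of variance 1/gamma1
        # and apply an assumed soft-threshold denoiser.
        lam = 1.0 / gamma1
        x1 = soft_threshold(r1, lam)
        alpha1 = np.clip(np.mean(np.abs(x1) > 0), 1e-6, 1 - 1e-6)  # mean divergence
        eta1 = gamma1 / alpha1
        gamma2 = eta1 - gamma1
        r2 = (eta1 * x1 - gamma1 * r1) / gamma2   # extrinsic mean passed forward

        # LMMSE stage under the linear model (direct solve for clarity).
        C = np.linalg.inv(gamma_w * AtA + gamma2 * np.eye(N))
        x2 = C @ (gamma_w * Aty + gamma2 * r2)
        alpha2 = np.clip(gamma2 * np.trace(C) / N, 1e-6, 1 - 1e-6)
        eta2 = gamma2 / alpha2
        gamma1 = eta2 - gamma2
        r1 = (eta2 * x2 - gamma2 * r2) / gamma1   # extrinsic mean passed back
    return x1
```

A full implementation would replace the direct inverse with the precomputed SVD of $\mathrm{A}$ and would typically match the denoiser to the assumed prior on $\mathrm{x}^{0}$; the point here is only the alternation of denoising and LMMSE stages with extrinsic-information exchange.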
| Original language | English (US) |
| --- | --- |
| Article number | 8713501 |
| Pages (from-to) | 6664-6684 |
| Number of pages | 21 |
| Journal | IEEE Transactions on Information Theory |
| Volume | 65 |
| Issue number | 10 |
| DOIs | |
| State | Published - Oct 2019 |
Keywords
- Belief propagation
- compressive sensing
- inference algorithms
- message passing
- random matrices
ASJC Scopus subject areas
- Information Systems
- Computer Science Applications
- Library and Information Sciences