TY - JOUR
T1 - AMP-Inspired Deep Networks for Sparse Linear Inverse Problems
AU - Borgerding, Mark
AU - Schniter, Philip
AU - Rangan, Sundeep
N1 - Funding Information:
Manuscript received December 5, 2016; revised April 15, 2017; accepted May 7, 2017. Date of publication May 25, 2017; date of current version June 21, 2017. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Sotirios Chatzis. The work of M. Borgerding and P. Schniter was supported in part by the National Science Foundation under Grant 1527162 and Grant 1539960. The work of S. Rangan was supported by the National Science Foundation under Grant 1302336, Grant 1547332, and Grant 1564142. This paper was presented in part at the 2016 IEEE Global Conference on Signal and Information Processing, Washington, DC, USA, Dec. 2016. (Corresponding author: Philip Schniter.) M. Borgerding and P. Schniter are with the Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210 USA (e-mail: borgerding.7@osu.edu; schniter.1@osu.edu).
Publisher Copyright:
© 2017 IEEE.
PY - 2017/8/15
Y1 - 2017/8/15
N2 - Deep learning has gained great popularity due to its widespread success on many inference problems. We consider the application of deep learning to the sparse linear inverse problem, where one seeks to recover a sparse signal from a few noisy linear measurements. In this paper, we propose two novel neural-network architectures that decouple prediction errors across layers in the same way that the approximate message passing (AMP) algorithms decouple them across iterations: through Onsager correction. First, we propose a "learned AMP" network that significantly improves upon Gregor and LeCun's "learned ISTA." Second, inspired by the recently proposed "vector AMP" (VAMP) algorithm, we propose a "learned VAMP" network that offers increased robustness to deviations in the measurement matrix from i.i.d. Gaussian. In both cases, we jointly learn the linear transforms and scalar nonlinearities of the network. Interestingly, with i.i.d. signals, the linear transforms and scalar nonlinearities prescribed by the VAMP algorithm coincide with the values learned through back-propagation, leading to an intuitive interpretation of learned VAMP. Finally, we apply our methods to two problems from 5G wireless communications: compressive random access and massive-MIMO channel estimation.
AB - Deep learning has gained great popularity due to its widespread success on many inference problems. We consider the application of deep learning to the sparse linear inverse problem, where one seeks to recover a sparse signal from a few noisy linear measurements. In this paper, we propose two novel neural-network architectures that decouple prediction errors across layers in the same way that the approximate message passing (AMP) algorithms decouple them across iterations: through Onsager correction. First, we propose a "learned AMP" network that significantly improves upon Gregor and LeCun's "learned ISTA." Second, inspired by the recently proposed "vector AMP" (VAMP) algorithm, we propose a "learned VAMP" network that offers increased robustness to deviations in the measurement matrix from i.i.d. Gaussian. In both cases, we jointly learn the linear transforms and scalar nonlinearities of the network. Interestingly, with i.i.d. signals, the linear transforms and scalar nonlinearities prescribed by the VAMP algorithm coincide with the values learned through back-propagation, leading to an intuitive interpretation of learned VAMP. Finally, we apply our methods to two problems from 5G wireless communications: compressive random access and massive-MIMO channel estimation.
KW - Deep learning
KW - approximate message passing
KW - compressive sensing
KW - massive MIMO
KW - random access
UR - http://www.scopus.com/inward/record.url?scp=85028375200&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85028375200&partnerID=8YFLogxK
U2 - 10.1109/TSP.2017.2708040
DO - 10.1109/TSP.2017.2708040
M3 - Article
AN - SCOPUS:85028375200
VL - 65
SP - 4293
EP - 4308
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
SN - 1053-587X
IS - 16
M1 - 7934066
ER -