TY - GEN
T1 - Inference in Deep Networks in High Dimensions
AU - Fletcher, Alyson K.
AU - Rangan, Sundeep
AU - Schniter, Philip
N1 - Funding Information:
A. K. Fletcher is supported in part by National Science Foundation grants 1254204 and 1564278 as well as the Office of Naval Research grant N00014-15-1-2677. S. Rangan is supported in part by the National Science Foundation under grants 1302336, 1564142, and 1547332. P. Schniter is supported in part by the National Science Foundation grant CCF-1527162.
Publisher Copyright:
© 2018 IEEE.
PY - 2018/8/15
Y1 - 2018/8/15
N2 - Deep generative networks provide a powerful tool for modeling complex data in a wide range of applications. In inverse problems that use these networks as generative priors on data, one must often infer the inputs of the network from its outputs. Inference is also required for sampling during stochastic training of these generative models. This paper considers inference in a deep stochastic neural network where the parameters (e.g., weights, biases, and activation functions) are known and the problem is to estimate the values of the input and hidden units from the output. A novel and computationally tractable inference method called Multi-Layer Vector Approximate Message Passing (ML-VAMP) is presented. Our main contribution is to show that the mean-squared error (MSE) of ML-VAMP can be exactly predicted in a certain large-system limit. In addition, the MSE achieved by ML-VAMP matches the Bayes-optimal value recently postulated by Reeves when certain fixed-point equations have unique solutions.
AB - Deep generative networks provide a powerful tool for modeling complex data in a wide range of applications. In inverse problems that use these networks as generative priors on data, one must often infer the inputs of the network from its outputs. Inference is also required for sampling during stochastic training of these generative models. This paper considers inference in a deep stochastic neural network where the parameters (e.g., weights, biases, and activation functions) are known and the problem is to estimate the values of the input and hidden units from the output. A novel and computationally tractable inference method called Multi-Layer Vector Approximate Message Passing (ML-VAMP) is presented. Our main contribution is to show that the mean-squared error (MSE) of ML-VAMP can be exactly predicted in a certain large-system limit. In addition, the MSE achieved by ML-VAMP matches the Bayes-optimal value recently postulated by Reeves when certain fixed-point equations have unique solutions.
UR - http://www.scopus.com/inward/record.url?scp=85052473533&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85052473533&partnerID=8YFLogxK
U2 - 10.1109/ISIT.2018.8437792
DO - 10.1109/ISIT.2018.8437792
M3 - Conference contribution
AN - SCOPUS:85052473533
SN - 9781538647806
T3 - IEEE International Symposium on Information Theory - Proceedings
SP - 1884
EP - 1888
BT - 2018 IEEE International Symposium on Information Theory, ISIT 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE International Symposium on Information Theory, ISIT 2018
Y2 - 17 June 2018 through 22 June 2018
ER -