TY - JOUR
T1 - Matrix inference and estimation in multi-layer models
T2 - Journal of Statistical Mechanics: Theory and Experiment
N1 - This article is an updated version of: Pandit P, Sahraee-Ardakan M, Rangan S, Schniter P and Fletcher A K 2020 Matrix inference and estimation in multi-layer models Advances in Neural Information Processing Systems vol 33 ed H Larochelle, M Ranzato, R Hadsell, M F Balcan and H Lin (New York: Curran Associates) pp 22456–67. Code available at https://github.com/parthe/ML-Mat-VAMP.
AU - Pandit, Parthe
AU - Sahraee-Ardakan, Mojtaba
AU - Rangan, Sundeep
AU - Schniter, Philip
AU - Fletcher, Alyson K.
N1 - Publisher Copyright:
© 2021 IOP Publishing Ltd and SISSA Medialab srl.
PY - 2021/12
Y1 - 2021/12
N2 - We consider the problem of estimating the input and hidden variables of a stochastic multi-layer neural network (NN) from an observation of the output. The hidden variables in each layer are represented as matrices with statistical interactions along both rows and columns. This problem applies to matrix imputation, signal recovery via deep generative prior models, multi-task and mixed regression, and learning certain classes of two-layer NNs. We extend a recently developed algorithm, multi-layer vector approximate message passing, to this matrix-valued inference problem. It is shown that the performance of the proposed multi-layer matrix vector approximate message passing (ML-Mat-VAMP) algorithm can be exactly predicted in a certain random large-system limit, where the dimensions N × d of the unknown quantities grow as N → ∞ with d fixed. In the two-layer neural-network learning problem, this scaling corresponds to the case where the number of input features and the number of training samples grow to infinity while the number of hidden nodes stays fixed. The analysis enables a precise prediction of the parameter and test error of the learning procedure.
AB - We consider the problem of estimating the input and hidden variables of a stochastic multi-layer neural network (NN) from an observation of the output. The hidden variables in each layer are represented as matrices with statistical interactions along both rows and columns. This problem applies to matrix imputation, signal recovery via deep generative prior models, multi-task and mixed regression, and learning certain classes of two-layer NNs. We extend a recently developed algorithm, multi-layer vector approximate message passing, to this matrix-valued inference problem. It is shown that the performance of the proposed multi-layer matrix vector approximate message passing (ML-Mat-VAMP) algorithm can be exactly predicted in a certain random large-system limit, where the dimensions N × d of the unknown quantities grow as N → ∞ with d fixed. In the two-layer neural-network learning problem, this scaling corresponds to the case where the number of input features and the number of training samples grow to infinity while the number of hidden nodes stays fixed. The analysis enables a precise prediction of the parameter and test error of the learning procedure.
UR - http://www.scopus.com/inward/record.url?scp=85122501344&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85122501344&partnerID=8YFLogxK
U2 - 10.1088/1742-5468/ac3a75
DO - 10.1088/1742-5468/ac3a75
M3 - Article
AN - SCOPUS:85122501344
SN - 1742-5468
VL - 2021
JO - Journal of Statistical Mechanics: Theory and Experiment
JF - Journal of Statistical Mechanics: Theory and Experiment
IS - 12
M1 - 124004
ER -