TY - CONF
T1 - Sparse multivariate Bernoulli processes in high dimensions
AU - Pandit, Parthe
AU - Sahraee, Mojtaba
AU - Amini, Arash A.
AU - Rangan, Sundeep
AU - Fletcher, Alyson K.
N1 - Funding Information:
A.K. Fletcher were supported in part by the National Science Foundation under Grants 1738285 and 1738286 and the Office of Naval Research under Grant N00014-15-1-2677. S. Rangan was supported in part by the National Science Foundation under Grants 1116589, 1302336, and 1547332, and the industrial affiliates of NYU WIRELESS.
Publisher Copyright:
© 2019 by the author(s).
PY - 2020
Y1 - 2020
N2 - We consider the problem of estimating the parameters of a multivariate Bernoulli process with auto-regressive feedback in the high-dimensional setting where the number of samples available is much less than the number of parameters. This problem arises in learning interconnections of networks of dynamical systems with spiking or binary-valued data. We also allow the process to depend on its past up to a lag p, for a general p ≥ 1, allowing for more realistic modeling in many applications. We propose and analyze an ℓ1-regularized maximum likelihood (ML) estimator under the assumption that the parameter tensor is approximately sparse. Rigorous analysis of such estimators is made challenging by the dependent and non-Gaussian nature of the process as well as the presence of nonlinearities and multi-level feedback. We derive precise upper bounds on the mean-squared estimation error in terms of the number of samples, the dimensions of the process, the lag p, and other key statistical properties of the model. The ideas presented can be used in the rigorous high-dimensional analysis of regularized M-estimators for other sparse nonlinear and non-Gaussian processes with long-range dependence.
AB - We consider the problem of estimating the parameters of a multivariate Bernoulli process with auto-regressive feedback in the high-dimensional setting where the number of samples available is much less than the number of parameters. This problem arises in learning interconnections of networks of dynamical systems with spiking or binary-valued data. We also allow the process to depend on its past up to a lag p, for a general p ≥ 1, allowing for more realistic modeling in many applications. We propose and analyze an ℓ1-regularized maximum likelihood (ML) estimator under the assumption that the parameter tensor is approximately sparse. Rigorous analysis of such estimators is made challenging by the dependent and non-Gaussian nature of the process as well as the presence of nonlinearities and multi-level feedback. We derive precise upper bounds on the mean-squared estimation error in terms of the number of samples, the dimensions of the process, the lag p, and other key statistical properties of the model. The ideas presented can be used in the rigorous high-dimensional analysis of regularized M-estimators for other sparse nonlinear and non-Gaussian processes with long-range dependence.
UR - http://www.scopus.com/inward/record.url?scp=85085034417&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85085034417&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85085034417
T2 - 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019
Y2 - 16 April 2019 through 18 April 2019
ER -