TY - JOUR
T1 - Lower bounds for learning distributions under communication constraints via Fisher information
AU - Barnes, Leighton Pate
AU - Han, Yanjun
AU - Özgür, Ayfer
N1 - Publisher Copyright:
© 2020 Leighton Pate Barnes, Yanjun Han, Ayfer Özgür. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v21/19-737.html.
PY - 2020/10
Y1 - 2020/10
N2 - We consider the problem of learning high-dimensional, nonparametric and structured (e.g., Gaussian) distributions in distributed networks, where each node in the network observes an independent sample from the underlying distribution and can use k bits to communicate its sample to a central processor. We consider three different models for communication. Under the independent model, each node communicates its sample to a central processor by independently encoding it into k bits. Under the more general sequential or blackboard communication models, nodes can share information interactively but each node is restricted to write at most k bits on the final transcript. We characterize the impact of the communication constraint k on the minimax risk of estimating the underlying distribution under ℓ2 loss. We develop minimax lower bounds that apply in a unified way to many common statistical models and reveal that the impact of the communication constraint can be qualitatively different depending on the tail behavior of the score function associated with each model. A key ingredient in our proofs is a geometric characterization of Fisher information from quantized samples.
AB - We consider the problem of learning high-dimensional, nonparametric and structured (e.g., Gaussian) distributions in distributed networks, where each node in the network observes an independent sample from the underlying distribution and can use k bits to communicate its sample to a central processor. We consider three different models for communication. Under the independent model, each node communicates its sample to a central processor by independently encoding it into k bits. Under the more general sequential or blackboard communication models, nodes can share information interactively but each node is restricted to write at most k bits on the final transcript. We characterize the impact of the communication constraint k on the minimax risk of estimating the underlying distribution under ℓ2 loss. We develop minimax lower bounds that apply in a unified way to many common statistical models and reveal that the impact of the communication constraint can be qualitatively different depending on the tail behavior of the score function associated with each model. A key ingredient in our proofs is a geometric characterization of Fisher information from quantized samples.
KW - Communication constraints
KW - Fisher information
KW - Learning distributions
KW - Statistical estimation
UR - http://www.scopus.com/inward/record.url?scp=85098456781&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098456781&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:85098456781
SN - 1532-4435
VL - 21
JO - Journal of Machine Learning Research
JF - Journal of Machine Learning Research
ER -