We consider the problem of learning high-dimensional discrete distributions and structured (e.g., Gaussian) distributions in distributed networks, where each node observes an independent sample from the underlying distribution and can use at most k bits to communicate its sample to a central processor. We adopt a blackboard communication model, in which nodes may share information interactively through a public blackboard, but each node is restricted to writing at most k bits on the final transcript. We characterize the impact of the communication constraint k on the minimax risk of estimating the underlying distribution under ℓ2 loss, and we develop minimax lower bounds that apply in a unified way to many common statistical models. This is achieved by explicitly characterizing the Fisher information contained in the blackboard transcript.