Abstract
Deep generative models have emerged as a powerful class of priors for signals in various inverse problems such as compressed sensing, phase retrieval, and super-resolution. In this work, we consider the compressed sensing problem and assume the unknown signal to lie in the range of a pre-trained generative model. A popular approach to signal recovery is gradient descent in the low-dimensional latent space. While gradient descent has achieved good empirical performance, its theoretical behavior is not well understood. We introduce the use of stochastic gradient Langevin dynamics (SGLD) for compressed sensing with a generative prior. Under mild assumptions on the generative model, we prove the convergence of SGLD to the true signal. We also demonstrate empirical performance competitive with standard gradient descent.
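As a rough illustration of the recovery procedure the abstract describes, the sketch below runs Langevin-type updates in the latent space of a toy random-weight generator. It is not the authors' implementation: the generator `G`, the Gaussian sensing matrix `A`, the step size `eta`, and the inverse temperature `beta` are all placeholder choices, and the full measurement gradient is used in place of stochastic minibatches.

```python
import numpy as np

# Minimal sketch (assumptions, not the paper's code): recover a latent code z
# such that A @ G(z) matches the measurements y, using Langevin-type updates
# z <- z - eta * grad_L(z) + sqrt(2 * eta / beta) * N(0, I).

rng = np.random.default_rng(0)
n, m, k = 200, 50, 10                     # signal dim, measurements, latent dim

# Toy two-layer ReLU generator with random (normalized) weights.
W1 = rng.standard_normal((100, k)) / np.sqrt(k)
W2 = rng.standard_normal((n, 100)) / np.sqrt(100)

def G(z):
    return W2 @ np.maximum(W1 @ z, 0.0)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
z_star = rng.standard_normal(k)
y = A @ G(z_star)                               # noiseless measurements

def loss_and_grad(z):
    h = W1 @ z
    r = A @ (W2 @ np.maximum(h, 0.0)) - y       # residual A G(z) - y
    # Chain rule through the toy generator: grad of ||A G(z) - y||^2.
    g = 2.0 * (W1.T @ ((W2.T @ (A.T @ r)) * (h > 0)))
    return float(r @ r), g

z = rng.standard_normal(k)
eta, beta = 1e-4, 1e4                            # placeholder hyperparameters
for t in range(5000):
    _, g = loss_and_grad(z)
    z = z - eta * g + np.sqrt(2.0 * eta / beta) * rng.standard_normal(k)

print("final measurement error:", np.linalg.norm(A @ G(z) - y))
```

In practice the gradient through a pre-trained generator would come from automatic differentiation; the hand-written chain rule above only applies to this toy two-layer ReLU network.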
Original language | English (US) |
---|---|
Pages (from-to) | 7410-7422 |
Number of pages | 13 |
Journal | IEEE Transactions on Information Theory |
Volume | 68 |
Issue number | 11 |
DOIs | |
State | Published - Nov 1 2022 |
Keywords
- Compressed sensing
- Langevin dynamics
- generative models
ASJC Scopus subject areas
- Information Systems
- Computer Science Applications
- Library and Information Sciences