TY - GEN

T1 - Resolution limits of sparse coding in high dimensions

AU - Fletcher, Alyson K.

AU - Rangan, Sundeep

AU - Goyal, Vivek K.

PY - 2009

Y1 - 2009

N2 - This paper addresses the problem of sparsity pattern detection for unknown κ-sparse n-dimensional signals observed through m noisy, random linear measurements. Sparsity pattern recovery arises in a number of settings including statistical model selection, pattern detection, and image acquisition. The main results in this paper are necessary and sufficient conditions for asymptotically reliable sparsity pattern recovery in terms of the dimensions m, n and κ as well as the signal-to-noise ratio (SNR) and the minimum-to-average ratio (MAR) of the nonzero entries of the signal. We show that m > 2κ log(n - κ)/(SNR · MAR) is necessary for any algorithm to succeed, regardless of complexity; this matches a previous sufficient condition for maximum likelihood estimation within a constant factor under certain scalings of κ, SNR and MAR with n. We also show a sufficient condition for a computationally trivial thresholding algorithm that is larger than the previous expression by only a factor of 4(1+SNR) and larger than the requirement for lasso by only a factor of 4/MAR. This provides insight on the precise value and limitations of convex programming-based algorithms.

UR - http://www.scopus.com/inward/record.url?scp=84858790046&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84858790046&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84858790046

SN - 9781605609492

T3 - Advances in Neural Information Processing Systems 21 - Proceedings of the 2008 Conference

SP - 449

EP - 456

BT - Advances in Neural Information Processing Systems 21 - Proceedings of the 2008 Conference

PB - Neural Information Processing Systems

T2 - 22nd Annual Conference on Neural Information Processing Systems, NIPS 2008

Y2 - 8 December 2008 through 11 December 2008

ER -