TY - JOUR
T1 - Sample-efficient algorithms for recovering structured signals from magnitude-only measurements
AU - Jagatap, Gauri
AU - Hegde, Chinmay
N1 - Funding Information:
Manuscript received November 26, 2017; revised August 8, 2018; accepted January 22, 2019. Date of publication March 5, 2019; date of current version June 14, 2019. This work was supported in part by the National Science Foundation under the Grant CCF-1566281 and Grant CCF-1750920. This paper was presented at the 2017 Annual Conference on Neural Information Processing Systems (NIPS) [1]; this version has an expanded set of theoretical results as well as numerical experiments.
Publisher Copyright:
© 1963-2012 IEEE.
PY - 2019/7
Y1 - 2019/7
N2 - We consider the problem of recovering a signal x ∈ R^n from magnitude-only measurements y_i = |⟨a_i, x⟩| for i = 1, 2, ..., m. This is a stylized version of the classical phase retrieval problem, and is a fundamental challenge in nano- and bio-imaging systems, astronomical imaging, and speech processing. It is well known that the above problem is ill-posed, and therefore some additional assumptions on the signal and/or the measurements are necessary. In this paper, we consider the case where the underlying signal x is s-sparse. For this case, we develop a novel recovery algorithm that we call Compressive Phase Retrieval with Alternating Minimization, or CoPRAM. Our algorithm is simple and is obtained via a natural combination of the classical alternating minimization approach for phase retrieval with the CoSaMP algorithm for sparse recovery. Despite its simplicity, we prove that our algorithm achieves a sample complexity of O(s^2 log n) with Gaussian measurements a_i, which matches the best known existing results; moreover, it also demonstrates linear convergence in theory and practice. An appealing feature of our algorithm is that it requires no extra tuning parameters other than the signal sparsity level s. Moreover, we show that our algorithm is robust to noise. The quadratic dependence of sample complexity on the sparsity level is sub-optimal, and we demonstrate how to alleviate this via additional assumptions beyond sparsity. First, we study the (practically) relevant case where the sorted coefficients of the underlying sparse signal exhibit a power-law decay. In this scenario, we show that the CoPRAM algorithm achieves a sample complexity of O(s log n), which is close to the information-theoretic limit. We then consider the case where the underlying signal x arises from structured sparsity models. We specifically examine the case of block-sparse signals with uniform block size b and block sparsity k = s/b. For this problem, we design a recovery algorithm that we call Block CoPRAM that further reduces the sample complexity to O(ks log n). For sufficiently large block lengths b = Θ(s), this bound equals O(s log n). To our knowledge, our approach constitutes the first family of linearly convergent algorithms for signal recovery from magnitude-only Gaussian measurements that exhibit a sub-quadratic dependence on the signal sparsity level.
AB - We consider the problem of recovering a signal x ∈ R^n from magnitude-only measurements y_i = |⟨a_i, x⟩| for i = 1, 2, ..., m. This is a stylized version of the classical phase retrieval problem, and is a fundamental challenge in nano- and bio-imaging systems, astronomical imaging, and speech processing. It is well known that the above problem is ill-posed, and therefore some additional assumptions on the signal and/or the measurements are necessary. In this paper, we consider the case where the underlying signal x is s-sparse. For this case, we develop a novel recovery algorithm that we call Compressive Phase Retrieval with Alternating Minimization, or CoPRAM. Our algorithm is simple and is obtained via a natural combination of the classical alternating minimization approach for phase retrieval with the CoSaMP algorithm for sparse recovery. Despite its simplicity, we prove that our algorithm achieves a sample complexity of O(s^2 log n) with Gaussian measurements a_i, which matches the best known existing results; moreover, it also demonstrates linear convergence in theory and practice. An appealing feature of our algorithm is that it requires no extra tuning parameters other than the signal sparsity level s. Moreover, we show that our algorithm is robust to noise. The quadratic dependence of sample complexity on the sparsity level is sub-optimal, and we demonstrate how to alleviate this via additional assumptions beyond sparsity. First, we study the (practically) relevant case where the sorted coefficients of the underlying sparse signal exhibit a power-law decay. In this scenario, we show that the CoPRAM algorithm achieves a sample complexity of O(s log n), which is close to the information-theoretic limit. We then consider the case where the underlying signal x arises from structured sparsity models. We specifically examine the case of block-sparse signals with uniform block size b and block sparsity k = s/b. For this problem, we design a recovery algorithm that we call Block CoPRAM that further reduces the sample complexity to O(ks log n). For sufficiently large block lengths b = Θ(s), this bound equals O(s log n). To our knowledge, our approach constitutes the first family of linearly convergent algorithms for signal recovery from magnitude-only Gaussian measurements that exhibit a sub-quadratic dependence on the signal sparsity level.
KW - Phase retrieval
KW - alternating minimization
KW - block-sparsity
KW - non-convex optimization
KW - sparsity
KW - structured sparsity
UR - http://www.scopus.com/inward/record.url?scp=85067618233&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85067618233&partnerID=8YFLogxK
U2 - 10.1109/TIT.2019.2902924
DO - 10.1109/TIT.2019.2902924
M3 - Article
AN - SCOPUS:85067618233
SN - 0018-9448
VL - 65
SP - 4434
EP - 4456
JO - IEEE Transactions on Information Theory
JF - IEEE Transactions on Information Theory
IS - 7
M1 - 8660586
ER -