TY - GEN
T1 - Sharpening Sparse Regularizers
AU - Al-Shabili, Abdullah
AU - Selesnick, Ivan
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/5
Y1 - 2019/5
N2 - Non-convex penalties outperform the convex ℓ1-norm, but generally sacrifice the convexity of the cost function. As a middle ground, we propose a framework to design non-convex penalties that induce sparsity more effectively than the ℓ1-norm, but without sacrificing the convexity of the cost function. The non-smooth non-convex regularizers are constructed by subtracting from the non-smooth convex penalty its smoothed version, which we obtain via a proposed generalized infimal convolution smoothing technique. We call the proposed framework sharpening sparse regularizers (SSR) to convey its advantages over both convex and non-convex regularizers. The SSR framework is applicable to any sparsity-regularized ill-posed linear inverse problem. Furthermore, it recovers and generalizes several non-convex penalties in the literature as special cases. The SSR-RLS problem can be formulated as a saddle-point problem and solved by a scalable generalized primal-dual algorithm. The effectiveness of the SSR framework is demonstrated by numerical experiments.
AB - Non-convex penalties outperform the convex ℓ1-norm, but generally sacrifice the convexity of the cost function. As a middle ground, we propose a framework to design non-convex penalties that induce sparsity more effectively than the ℓ1-norm, but without sacrificing the convexity of the cost function. The non-smooth non-convex regularizers are constructed by subtracting from the non-smooth convex penalty its smoothed version, which we obtain via a proposed generalized infimal convolution smoothing technique. We call the proposed framework sharpening sparse regularizers (SSR) to convey its advantages over both convex and non-convex regularizers. The SSR framework is applicable to any sparsity-regularized ill-posed linear inverse problem. Furthermore, it recovers and generalizes several non-convex penalties in the literature as special cases. The SSR-RLS problem can be formulated as a saddle-point problem and solved by a scalable generalized primal-dual algorithm. The effectiveness of the SSR framework is demonstrated by numerical experiments.
KW - Sparsity
KW - convex analysis
KW - convex optimization
KW - non-convexity
KW - smoothing
UR - http://www.scopus.com/inward/record.url?scp=85068980805&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85068980805&partnerID=8YFLogxK
U2 - 10.1109/ICASSP.2019.8683039
DO - 10.1109/ICASSP.2019.8683039
M3 - Conference contribution
AN - SCOPUS:85068980805
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 4908
EP - 4912
BT - 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019
Y2 - 12 May 2019 through 17 May 2019
ER -