TY - JOUR
T1 - Sharpening Sparse Regularizers via Smoothing
AU - Al-Shabili, Abdullah H.
AU - Feng, Yining
AU - Selesnick, Ivan
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2021
Y1 - 2021
AB - Non-convex sparsity-inducing penalties outperform their convex counterparts, but generally sacrifice the convexity of the cost function. As a middle ground, we propose the sharpening sparse regularizers (SSR) framework to design non-separable non-convex penalties that induce sparsity more effectively than convex penalties such as the $\ell_1$ and nuclear norms, yet without sacrificing the convexity of the cost function. The overall convexity of the problem is preserved by exploiting the relative strong convexity of the data fidelity term. The framework constructs each penalty as a difference of convex functions, namely the difference between a convex sparsity-inducing penalty and its smoothed version, where the smoothed version is obtained by a proposed generalized infimal convolution smoothing technique. SSR recovers and generalizes several non-convex penalties in the literature as special cases. The framework applies to any ill-posed linear inverse problem with sparsity-regularized least squares, and it extends beyond least squares to accommodate Bregman divergences and other sparsity structures such as low-rankness. The SSR optimization problem can be formulated as a saddle-point problem and solved by a scalable forward-backward splitting algorithm. Numerical experiments in several applications demonstrate the effectiveness of the framework.
KW - Sparsity
KW - convex analysis
KW - convex optimization
KW - low-rank
KW - non-convexity
KW - smoothing
UR - http://www.scopus.com/inward/record.url?scp=85126647047&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85126647047&partnerID=8YFLogxK
DO - 10.1109/OJSP.2021.3104497
M3 - Article
SN - 2644-1322
VL - 2
SP - 396
EP - 409
JO - IEEE Open Journal of Signal Processing
JF - IEEE Open Journal of Signal Processing
M1 - 9512409
ER -
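
A minimal sketch of the construction described in the abstract, under stated assumptions: it implements the generalized minimax-concave special case that the abstract says SSR recovers (penalty = $\ell_1$ norm minus its infimal-convolution smoothing, cf. Selesnick, 2017), with the illustrative choice B = sqrt(gamma/lam) * A and a forward-backward iteration on the induced saddle-point problem. The function names and step-size rule are assumptions for illustration, not the paper's exact algorithm.

import numpy as np

def soft(x, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def gmc_fbs(A, y, lam, gamma=0.8, n_iter=1000):
    # Sketch (assumed special case of SSR) of sparse least squares with a
    # GMC-type penalty:
    #   minimize_x  0.5 * ||y - A x||^2 + lam * ( ||x||_1 - S_B(x) ),
    # where S_B(x) = min_v ||v||_1 + 0.5 * ||B (x - v)||^2 is the smoothed
    # (infimal-convolution) version of the l1 norm, with B = sqrt(gamma/lam) * A
    # and 0 <= gamma < 1 so that the overall cost stays convex.
    # The min-max (saddle-point) form in (x, v) is solved by forward-backward
    # splitting.
    n = A.shape[1]
    x = np.zeros(n)
    v = np.zeros(n)
    AtA = A.T @ A
    Aty = A.T @ y
    lamBtB = gamma * AtA                      # lam * B^T B = gamma * A^T A
    rho = max(1.0, gamma / (1.0 - gamma)) * np.linalg.norm(AtA, 2)
    mu = 1.9 / rho                            # step size in (0, 2 / rho)
    for _ in range(n_iter):
        d = lamBtB @ (v - x)                  # coupling term lam * B^T B (v - x)
        x_new = soft(x - mu * (AtA @ x - Aty + d), mu * lam)
        v_new = soft(v - mu * d, mu * lam)
        x, v = x_new, v_new
    return x

With gamma = 0 the smoothing term vanishes and the iteration reduces to ISTA for the ordinary $\ell_1$-regularized least squares problem, which makes the role of the smoothed penalty in "sharpening" the regularizer easy to see.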