Non-convex penalties induce sparsity more effectively than the convex ℓ₁-norm, but generally sacrifice the convexity of the cost function. As a middle ground, we propose a framework to design non-convex penalties that promote sparsity more effectively than the ℓ₁-norm without sacrificing the convexity of the cost function. The non-smooth non-convex regularizers are constructed by subtracting from a non-smooth convex penalty its smoothed version, where the smoothed version is obtained by a proposed generalized infimal convolution smoothing technique. We call the proposed framework sharpening sparse regularizers (SSR) to indicate its advantages over both convex and non-convex regularizers. The SSR framework is applicable to any sparsity-regularized ill-posed linear inverse problem; moreover, it recovers and generalizes several non-convex penalties in the literature as special cases. The resulting SSR regularized least squares (SSR-RLS) problem can be formulated as a saddle-point problem and solved by a scalable generalized primal-dual algorithm. The effectiveness of the SSR framework is demonstrated by numerical experiments.
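As an illustrative sketch (not part of the abstract itself), one known instance of this subtraction construction is the generalized minimax-concave (GMC) penalty, obtained by subtracting from the ℓ₁ norm a generalized Moreau envelope; the symbols A, B, y, and λ below are assumptions following the standard regularized least-squares setup:

```latex
% Sketch of the subtraction construction, assuming the l1 norm as the
% convex penalty and a generalized Moreau envelope as its smoothed
% version (the GMC penalty is a known special case of this idea).
\[
  \psi_B(x) \;=\; \|x\|_1 \;-\; \min_{v \in \mathbb{R}^n}
  \Bigl\{ \|v\|_1 + \tfrac{1}{2}\,\|B(x - v)\|_2^2 \Bigr\}.
\]
% Although psi_B is non-convex, the regularized least-squares cost
\[
  F(x) \;=\; \tfrac{1}{2}\,\|y - A x\|_2^2 + \lambda\, \psi_B(x)
\]
% remains convex provided B is chosen so that
% B^T B \preceq (1/\lambda) A^T A.
```

This is a sketch of the convexity-preserving mechanism the abstract describes, not the SSR framework's full generality.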