Representer Theorems for Sparsity-Promoting ℓ1 Regularization
M. Unser, J. Fageot, H. Gupta
IEEE Transactions on Information Theory, vol. 62, no. 9, pp. 5167–5180, September 2016.
We present a theoretical analysis and comparison of the effect of ℓ1 versus ℓ2 regularization for the resolution of ill-posed linear inverse and compressed-sensing problems. Our formulation covers the most general setting where the solution is specified as the minimizer of a convex cost functional. We derive a series of representer theorems that give the generic form of the solution depending on the type of regularization. We start with the analysis of the problem in finite dimensions and then extend our results to the infinite-dimensional spaces ℓ2(ℤ) and ℓ1(ℤ). We also consider the use of linear transformations in the form of dictionaries or regularization operators. In particular, we show that the ℓ2 solution is forced to live in a predefined subspace that is intrinsically smooth and tied to the measurement operator. The ℓ1 solution, on the other hand, is formed by adaptively selecting a subset of atoms in a dictionary that is specified by the regularization operator. Besides the proof that ℓ1 solutions are intrinsically sparse, the main outcome of our investigation is that ℓ1 regularization is much more favorable for injecting prior knowledge: it results in a functional form that is independent of the system matrix, whereas this is not so in the ℓ2 scenario.
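The contrast drawn in the abstract can be illustrated numerically. The sketch below (not code from the paper; the dimensions, penalty weight, and solvers are illustrative assumptions) compares the two regularizers on a random underdetermined system: the ℓ2 (ridge) solution has the closed form A^T(AA^T + λI)^{-1}y, so it lies in the row space of A and is generically dense, while the ℓ1 (lasso) solution, computed here with plain ISTA, is driven to exact zeros by soft-thresholding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined system: m measurements of an n-dimensional sparse signal.
m, n = 20, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]   # 3-sparse ground truth
y = A @ x_true

lam = 0.05  # illustrative regularization weight

# l2 (ridge): closed form; the solution is confined to the row space of A.
x_l2 = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(m), y)

# l1 (lasso) via ISTA: proximal gradient with soft-thresholding.
def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-term gradient
x_l1 = np.zeros(n)
for _ in range(5000):
    grad = A.T @ (A @ x_l1 - y)
    x_l1 = soft(x_l1 - grad / L, lam / L)

nnz_l2 = int(np.sum(np.abs(x_l2) > 1e-6))
nnz_l1 = int(np.sum(np.abs(x_l1) > 1e-6))
print("nonzero entries, l2 solution:", nnz_l2)  # generically all n entries
print("nonzero entries, l1 solution:", nnz_l1)  # a small subset of atoms
```

The ℓ2 solution spreads energy over all coordinates because its form is dictated by the measurement operator, whereas the ℓ1 solution selects a small subset of dictionary atoms, matching the paper's qualitative conclusion.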
@ARTICLE(http://bigwww.epfl.ch/publications/unser1602.html, AUTHOR="Unser, M. and Fageot, J. and Gupta, H.", TITLE="Representer Theorems for Sparsity-Promoting $\ell_{1}$ Regularization", JOURNAL="{IEEE} Transactions on Information Theory", YEAR="2016", volume="62", number="9", pages="5167--5180", month="September", note="")