Sparsity and Infinite Divisibility
A. Amini, M. Unser
IEEE Transactions on Information Theory, vol. 60, no. 4, pp. 2346–2358, April 2014.
We adopt an innovation-driven framework and investigate the sparse/compressible distributions obtained by linearly measuring or expanding continuous-domain stochastic models. Starting from first principles, we show that all such distributions are necessarily infinitely divisible. This property is satisfied by many distributions used in statistical learning, such as the Gaussian and Laplace laws, and by a wide range of fat-tailed distributions, including Student's t and α-stable laws. However, it excludes some popular distributions used in compressed sensing, such as the Bernoulli-Gaussian distribution and distributions that decay like exp(−O(|x|^p)) for 1 < p < 2. We further explore the implications of infinite divisibility on such distributions and conclude that tail decay and unimodality are preserved by all linear functionals of the same continuous-domain process. We explain how these results help in selecting suitable variational techniques for statistically solving inverse problems such as denoising.
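For readers unfamiliar with the notion, the following is the standard textbook characterization of infinite divisibility (background material, not a result of the paper): a random variable X is infinitely divisible if, for every n, it can be written in distribution as a sum of n i.i.d. random variables, or equivalently, in terms of characteristic functions,

\hat{p}_X(\omega) = \bigl( \hat{p}_n(\omega) \bigr)^{n} \qquad \text{for all } n \in \mathbb{N},

where each \hat{p}_n is itself a valid characteristic function. For instance, the Laplace density \tfrac{1}{2} e^{-|x|} is infinitely divisible, since \hat{p}_X(\omega) = 1/(1+\omega^2) and \hat{p}_n(\omega) = (1+\omega^2)^{-1/n} is the characteristic function of the difference of two i.i.d. Gamma(1/n) variables; by contrast, the Bernoulli-Gaussian law discussed in the paper admits no such factorization.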
@ARTICLE(http://bigwww.epfl.ch/publications/amini1401.html, AUTHOR="Amini, A. and Unser, M.", TITLE="Sparsity and Infinite Divisibility", JOURNAL="{IEEE} Transactions on Information Theory", YEAR="2014", volume="60", number="4", pages="2346--2358", month="April", note="")