Splines and Imaging: From Compressed Sensing to Deep Neural Networks
M. Unser
UCLA Workshop on Deep Learning and Medical Applications (DLMA'20), Virtual, January 27-31, 2020.
Our intent is to demonstrate the optimality of splines for the resolution of inverse problems in imaging and the design of deep neural networks. To that end, we first describe a recent representer theorem which states that the extreme points of the solution set of a broad class of linear inverse problems with generalized total-variation regularization are adaptive splines whose type is tied to the underlying regularization operator L. For instance, when L is the n-th derivative (resp., the Laplacian) operator, the optimal reconstruction is a nonuniform polynomial (resp., polyharmonic) spline with the smallest possible number of adaptive knots.
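For concreteness, the underlying variational problem can be written in the following schematic form (the symbols y_m for the measurements, \nu_m for the linear measurement functionals, E for a convex data-fidelity term, and \lambda > 0 are introduced here for illustration):

\min_{f} \; \sum_{m=1}^{M} E\bigl(y_m, \langle \nu_m, f \rangle\bigr) + \lambda \, \| \mathrm{L} f \|_{\mathcal{M}},

where \|\cdot\|_{\mathcal{M}} is the total-variation norm on measures, the continuous-domain counterpart of the \ell_1 norm. The theorem then asserts that the extreme points of the solution set take the spline form

f(x) = \sum_{k=1}^{K} a_k \, \rho_{\mathrm{L}}(x - x_k) + p_0(x), \qquad K \le M - N_0,

with \rho_{\mathrm{L}} a Green's function of L, adaptive knots x_k, and p_0 a component in the N_0-dimensional null space of L.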
The crucial observation is that such continuous-domain solutions are intrinsically sparse and, hence, compatible with the kinds of formulations (and algorithms) used in compressed sensing.
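A minimal numerical sketch of this compatibility follows (the grid, the values, and all variable names are illustrative assumptions, not the authors' setup): for L = d^2/dx^2, the Green's function is the ReLU, so restricting the knots to a fine grid of candidates turns the regularized problem into an l1-penalized least-squares problem that a standard compressed-sensing solver such as ISTA handles directly.

import numpy as np

rng = np.random.default_rng(0)

# Toy 1D problem: noisy samples of a smooth function.
xs = np.sort(rng.uniform(0.0, 1.0, 30))                        # measurement locations
y = np.sin(2.0 * np.pi * xs) + 0.05 * rng.standard_normal(30)  # toy data

# Dictionary: affine part + ReLUs placed at candidate knots (Green's
# functions of the second derivative); the theorem predicts few survive.
t = np.linspace(0.0, 1.0, 50)                                  # candidate knots
A = np.column_stack([np.ones_like(xs), xs,
                     np.maximum(xs[:, None] - t[None, :], 0.0)])

lam = 0.05                                # regularization weight (illustrative)
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
w = np.zeros(A.shape[1])                  # w = (b0, b1, c_1, ..., c_50)

for _ in range(5000):                     # ISTA: gradient step, then soft-threshold
    w -= step * (A.T @ (A @ w - y))
    c = w[2:]                             # only the knot coefficients are penalized
    w[2:] = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)

print("active knots:", int(np.count_nonzero(np.abs(w[2:]) > 1e-6)), "of", t.size)

With lam large enough, only a handful of the candidate knots carry nonzero coefficients, which mirrors the sparsity promised by the continuous-domain theorem.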
We then make the link with current learning techniques by applying the theorem to optimize the shape of the individual activations in a deep neural network. By selecting the regularization functional to be the second-order total variation, we obtain an "optimal" deep-spline network whose activations are piecewise-linear splines with a few adaptive knots. Since each spline knot can be encoded with a ReLU unit, this provides a variational justification of the popular ReLU architecture. It also raises new computational challenges for the determination of optimal activations that are linear combinations of ReLUs.
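As a sketch of this knot-to-ReLU correspondence (the class below is hypothetical and only one possible parameterization; knot positions are fixed on a grid and only the coefficients are learned), a deep-spline activation can be implemented as an affine term plus a learnable linear combination of shifted ReLUs, with an l1 penalty on the coefficients acting as a discrete surrogate for the second-order total variation.

import torch
import torch.nn as nn

class DeepSpline(nn.Module):
    # Piecewise-linear spline activation: affine part plus K shifted ReLUs.
    # Knot positions are fixed on a grid; only the coefficients are learned.
    def __init__(self, num_knots=21, x_min=-3.0, x_max=3.0):
        super().__init__()
        self.register_buffer("knots", torch.linspace(x_min, x_max, num_knots))
        self.a = nn.Parameter(torch.zeros(num_knots))    # ReLU coefficients a_k
        self.b = nn.Parameter(torch.tensor([0.0, 1.0]))  # affine part b0 + b1*x

    def forward(self, x):
        relus = torch.relu(x.unsqueeze(-1) - self.knots)  # shape (..., num_knots)
        return self.b[0] + self.b[1] * x + relus @ self.a

    def tv2_penalty(self):
        # l1 norm of the jumps in slope: a discrete surrogate for TV^(2).
        return self.a.abs().sum()

In training, one would add lam * act.tv2_penalty() to the loss for every such activation; this drives most of the a_k to zero and leaves a piecewise-linear spline with only a few active knots.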
@INPROCEEDINGS(http://bigwww.epfl.ch/publications/unser2006.html, AUTHOR="Unser, M.", TITLE="Splines and Imaging: {F}rom Compressed Sensing to Deep Neural Networks", BOOKTITLE="{UCLA} Workshop on Deep Learning and Medical Applications ({DLMA'20})", YEAR="2020", address="Virtual", month="January 27-31,")