Deep Neural Nets: The Spline Perspective
M. Unser
2021 SIAM Annual Meeting (AN'21), Virtual, July 19-23, 2021.
Supervised learning is a fundamentally ill-posed problem. In practice, this indeterminacy is resolved by imposing constraints on the solution; these are either implicit, as in neural networks, or explicit, via a regularization functional. In this talk, I advocate a variational formulation that is supported by an "abstract" representer theorem characterizing the solutions of a broad class of functional optimization problems. I then use this theorem to derive the most prominent classical learning algorithms (e.g., kernel-based techniques and smoothing splines) as well as their "sparse" counterparts. This leads to the identification of sparse adaptive splines, which have some remarkable properties. I then show how such splines are relevant to the investigation of neural networks. In particular, they give us a functional interpretation of shallow, infinite-width ReLU neural nets. Sparse adaptive splines also turn out to be ideally suited for the specification of deep neural networks with free-form activations.
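As a concrete illustration (a sketch based on the speaker's related published work on second-order total-variation regularization; the abstract itself does not spell out the formulas), the one-dimensional instance of the variational problem can be written as

\[ \min_{f} \sum_{n=1}^{N} E\big(y_n, f(x_n)\big) + \lambda\, \mathrm{TV}^{(2)}(f), \qquad \mathrm{TV}^{(2)}(f) = \|\mathrm{D}^2 f\|_{\mathcal{M}}, \]

where \(\mathrm{D}^2\) is the second derivative taken in the sense of distributions and \(\|\cdot\|_{\mathcal{M}}\) is the total-variation norm on measures. The representer theorem then yields extreme-point solutions that are sparse adaptive (nonuniform) linear splines,

\[ f(x) = b_1 + b_2 x + \sum_{k=1}^{K} a_k\, (x - \tau_k)_+, \qquad K \le N-2, \]

which is exactly a shallow ReLU network with a linear skip connection, whose knots \(\tau_k\) and weights \(a_k\) adapt to the data.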
@INPROCEEDINGS(http://bigwww.epfl.ch/publications/unser2103.html,
  AUTHOR="Unser, M.",
  TITLE="Deep Neural Nets: {T}he Spline Perspective",
  BOOKTITLE="2021 {SIAM} Annual Meeting ({AN'21})",
  YEAR="2021",
  address="Virtual",
  month="July 19-23")