Functional Optimization Methods for Machine Learning
M. Unser
Summer School on Mathematics and Machine Learning for Image Analysis 2024 (MMLIA'24), Bologna, Italy, June 4-12, 2024.
In this mini-course, we show how various forms of supervised learning can be recast as optimization problems over suitable function spaces, subject to regularity constraints. Our family of regularization functionals has two components, combined as sketched after the list below:
- a regularization operator, which can be composed with an optional projection mechanism (Radon transform), and
- a (semi-)norm, which may be Hilbertian (RKHS) or sparsity-promoting (total variation).
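Schematically (a notational sketch assumed from the first paper cited below, with E a convex loss on the data pairs (x_m, y_m), L the regularization operator, and \lambda > 0 a tunable tradeoff parameter), the resulting training problems read

    \min_{f \in \mathcal{X}} \; \sum_{m=1}^{M} E\bigl(y_m, f(\boldsymbol{x}_m)\bigr) \;+\; \lambda \, \bigl\| \mathrm{L}\{f\} \bigr\|,

where the choice of the (semi-)norm, Hilbertian versus total variation, dictates whether the solution is a kernel expansion or a sparse sum of atoms.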
By invoking an abstract representer theorem, we obtain an explicit parametrization of the extremal points of the solution set. The latter translates into a concrete neural architecture and training procedure. We demonstrate the use of this variational formalism on a variety of examples, including several variants of spline-based regression. We also draw connections with classical kernel-based techniques and modern ReLU neural networks. Finally, we show how our framework applies to the learning of non-linearities in deep and not-so-deep networks.
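For the sparsity-promoting branch in one dimension with second-order regularization (L = D^2), the extreme points are adaptive piecewise-linear splines of the form f(x) = b_0 + b_1 x + \sum_k a_k (x - \tau_k)_+, i.e., single-hidden-layer ReLU networks. The following minimal Python sketch (our illustration, not the course's code: it fixes a grid of candidate knots and uses scikit-learn's Lasso as a stand-in for the discretized TV(2) penalty, absorbing the linear null-space term into the atoms for brevity) recovers such a spline with few active knots:

    import numpy as np
    from sklearn.linear_model import Lasso

    # Synthetic 1-D regression data.
    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(-1.0, 1.0, 50))
    y = np.cos(3.0 * x) + 0.1 * rng.standard_normal(x.size)

    # Fixed grid of candidate knots; each contributes one ReLU atom (x - tau)_+.
    taus = np.linspace(-1.0, 1.0, 101)
    Phi = np.maximum(x[:, None] - taus[None, :], 0.0)  # (50, 101) design matrix

    # The l1 penalty on the atom weights plays the role of a discretized
    # TV(2) norm (total variation of f''), so the minimizer is a
    # piecewise-linear (ReLU) spline with a small number of active knots.
    model = Lasso(alpha=1e-3, fit_intercept=True, max_iter=50_000).fit(Phi, y)

    active = np.flatnonzero(np.abs(model.coef_) > 1e-8)
    print(f"{active.size} active knots out of {taus.size} candidates")

    # Evaluate the learned spline on a dense grid.
    xx = np.linspace(-1.0, 1.0, 400)
    f_hat = model.intercept_ + np.maximum(xx[:, None] - taus[None, :], 0.0) @ model.coef_

Increasing alpha prunes more knots, tracing the path from dense kernel-like expansions to sparse ReLU networks.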
Bibliography
- M. Unser, "A Unifying Representer Theorem for Inverse Problems and Machine Learning," Foundations of Computational Mathematics, vol. 21, no. 4, pp. 941–960, August 2021.
- M. Unser, "From Kernel Methods to Neural Networks: A Unifying Variational Formulation," Foundations of Computational Mathematics, in press.