A Representer Theorem for Deep Neural Networks
M. Unser
Journal of Machine Learning Research, vol. 20, no. 110, pp. 1–30, 2019.
We propose to optimize the activation functions of a deep neural network by adding a corresponding functional regularization to the cost function. We justify the use of a second-order total-variation criterion. This allows us to derive a general representer theorem for deep neural networks that makes a direct connection with splines and sparsity. Specifically, we show that the optimal network configuration can be achieved with activation functions that are nonuniform linear splines with adaptive knots. The bottom line is that the action of each neuron is encoded by a spline whose parameters (including the number of knots) are optimized during the training procedure. The scheme results in a computational structure that is compatible with existing deep ReLU, parametric ReLU, APL (adaptive piecewise-linear), and MaxOut architectures. It also suggests novel optimization challenges and makes an explicit link with ℓ1 minimization and sparsity-promoting techniques.
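As a rough illustration of the computational structure described above (a hypothetical sketch, not the paper's own code or notation): each learned activation can be written as an affine term plus a weighted sum of shifted ReLUs, sigma(x) = b1 + b2 x + sum_k a_k ReLU(x - tau_k), and the second-order total variation of such a linear spline reduces to the ℓ1 norm of the coefficients a_k, which is what promotes a small number of active knots. In the sketch below, the fixed knot grid, the number of knots, and all identifiers (LinearSplineActivation, tv2_penalty, etc.) are illustrative assumptions; the adaptive-knot optimization studied in the paper is not reproduced.

import torch
import torch.nn as nn

class LinearSplineActivation(nn.Module):
    """Learnable piecewise-linear activation: affine part plus shifted ReLUs (illustrative)."""

    def __init__(self, num_knots=21, x_range=3.0):
        super().__init__()
        # Fixed knot grid for simplicity; the representer theorem allows the knots to adapt.
        self.register_buffer("tau", torch.linspace(-x_range, x_range, num_knots))
        self.a = nn.Parameter(torch.zeros(num_knots))  # ReLU coefficients (to be sparsified)
        self.b1 = nn.Parameter(torch.zeros(1))         # affine part: lies in the null space
        self.b2 = nn.Parameter(torch.ones(1))          # of the second-order TV regularizer

    def forward(self, x):
        # sigma(x) = b1 + b2*x + sum_k a_k * ReLU(x - tau_k)
        return self.b1 + self.b2 * x + torch.relu(x.unsqueeze(-1) - self.tau) @ self.a

    def tv2_penalty(self):
        # TV(2) of a linear spline = sum of absolute slope jumps = ℓ1 norm of the coefficients.
        return self.a.abs().sum()

# In training, one would add lam * (sum of tv2_penalty() over all such activations) to the
# data-fit loss, so the ℓ1 term drives most coefficients a_k to zero (few active knots per neuron).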
@ARTICLE(http://bigwww.epfl.ch/publications/unser1901.html,
    AUTHOR="Unser, M.",
    TITLE="A Representer Theorem for Deep Neural Networks",
    JOURNAL="Journal of Machine Learning Research",
    YEAR="2019",
    volume="20",
    number="110",
    pages="1--30")