Deep Neural Networks with Trainable Activations and Controlled Lipschitz Constant
S. Aziznejad, H. Gupta, J. Campos, M. Unser
IEEE Transactions on Signal Processing, vol. 68, pp. 4688–4699, August 10, 2020.
We introduce a variational framework to learn the activation functions of deep neural networks. Our aim is to increase the capacity of the network while controlling an upper bound on the Lipschitz constant of its input-output relation. To that end, we first establish a global bound for the Lipschitz constant of neural networks. Based on the obtained bound, we then formulate a variational problem for learning activation functions. The variational problem is infinite-dimensional and not computationally tractable. However, we prove that there always exists a solution that has continuous and piecewise-linear (linear-spline) activations. This reduces the original problem to a finite-dimensional minimization, where an ℓ1 penalty on the parameters of the activations favors the learning of sparse nonlinearities. We numerically compare our scheme with standard ReLU networks and their variations, PReLU and LeakyReLU, and we empirically demonstrate the practical aspects of our framework.
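As a rough illustration of the idea described in the abstract, the sketch below implements a learnable continuous piecewise-linear (linear-spline) activation with an ℓ1 penalty on the second finite differences of its knot values, which favors sparse nonlinearities (few active knots). This is not the authors' implementation; it assumes a PyTorch environment, and the names (LinearSplineActivation, tv2_penalty, max_slope, num_knots, knot_range) are hypothetical choices made for this example.

```python
import torch
import torch.nn as nn


class LinearSplineActivation(nn.Module):
    """Pointwise activation: linear interpolation of learnable values
    placed on a uniform grid of knots (a hedged, minimal sketch)."""

    def __init__(self, num_knots=21, knot_range=3.0):
        super().__init__()
        grid = torch.linspace(-knot_range, knot_range, num_knots)
        self.register_buffer("grid", grid)
        self.step = (2.0 * knot_range) / (num_knots - 1)
        # Initialize as the identity map (a spline with no kinks).
        self.values = nn.Parameter(grid.clone())

    def forward(self, x):
        # Locate each input in the knot grid and interpolate linearly;
        # inputs outside the grid are extrapolated with the boundary slopes.
        idx = torch.clamp(
            torch.floor((x - self.grid[0]) / self.step).long(),
            0, self.grid.numel() - 2,
        )
        x0 = self.grid[idx]
        y0, y1 = self.values[idx], self.values[idx + 1]
        return y0 + (y1 - y0) * (x - x0) / self.step

    def tv2_penalty(self):
        # l1 norm of the second finite differences of the knot values,
        # a discrete surrogate of the second-order total variation;
        # penalizing it promotes splines with few active knots.
        d2 = self.values[2:] - 2 * self.values[1:-1] + self.values[:-2]
        return d2.abs().sum()

    def max_slope(self):
        # Largest absolute slope of the spline, i.e. its Lipschitz constant.
        return ((self.values[1:] - self.values[:-1]) / self.step).abs().max()
```

In such a setup, one would typically add a regularization term proportional to the sum of the tv2_penalty terms to the training loss, and the standard composition bound (product over layers of the weight-matrix norm times the activation's max_slope) gives a global upper bound on the network's Lipschitz constant, in the spirit of the bound discussed in the paper.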
@ARTICLE(http://bigwww.epfl.ch/publications/aziznejad2001.html, AUTHOR="Aziznejad, S. and Gupta, H. and Campos, J. and Unser, M.", TITLE="Deep Neural Networks with Trainable Activations and Controlled {L}ipschitz Constant", JOURNAL="{IEEE} Transactions on Signal Processing", YEAR="2020", volume="68", number="", pages="4688--4699", month="August 10,", note="")