Learning Convex Regularizers: Does Depth Really Help?
S. Neumayer
PhD Prize talk, Mathematics and Image Analysis (MIA'23), Berlin, Federal Republic of Germany, February 1-3, 2023.
In this talk, we will revisit the state of the art in learned convex regularization. For comparison, we propose a regularizer based on a one-hidden-layer neural network with (almost) free-form activation functions. For training this sum-of-convex-ridges regularizer, we rely on connections to gradient-based denoisers. Our numerical experiments indicate that this simple architecture already achieves the best performance, which is very different from the non-convex case. Interestingly, even when learning both the filters and the activation functions of our model, we recover wavelet-like filters and thresholding-like activation functions. These observations raise the question of whether the fundamental limit has already been reached in the convex setting.
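The sum-of-convex-ridges architecture mentioned above can be illustrated with a minimal sketch. This is not the author's implementation: the learned spline profiles are replaced here by a fixed Huber profile, and the filters `W` are random rather than trained; both are stand-in assumptions for illustration only. The sketch also checks numerically that the resulting regularizer is convex along a segment.

```python
import numpy as np

def huber(t, delta=1.0):
    """Convex Huber profile: quadratic near 0, linear in the tails.
    Stands in for the (almost) free-form learned activations."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * (a - 0.5 * delta))

def ridge_regularizer(x, W, delta=1.0):
    """Sum-of-convex-ridges regularizer R(x) = sum_i psi(w_i . x),
    where W holds one filter w_i per row. Since each psi is convex
    and the ridges are affine in x, R is convex."""
    return huber(W @ x, delta).sum()

# Numerical convexity check along a random segment:
# R((1-l)a + l b) <= (1-l) R(a) + l R(b) for l in [0, 1].
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 5))   # hypothetical random filters
a, b = rng.standard_normal(5), rng.standard_normal(5)
for lam in (0.25, 0.5, 0.75):
    lhs = ridge_regularizer((1 - lam) * a + lam * b, W)
    rhs = (1 - lam) * ridge_regularizer(a, W) + lam * ridge_regularizer(b, W)
    assert lhs <= rhs + 1e-9
```

In the talk's setting, both the filters and the activation profiles are learned (with the profiles constrained to keep the regularizer convex), whereas this sketch fixes both.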
@INPROCEEDINGS(http://bigwww.epfl.ch/publications/neumayer2304.html,
AUTHOR="Neumayer, S.",
TITLE="Learning Convex Regularizers: {D}oes Depth Really Help?",
BOOKTITLE="Mathematics and Image Analysis ({MIA'23})",
YEAR="2023",
EDITOR="",
VOLUME="",
SERIES="",
PAGES="",
ADDRESS="Berlin, Federal Republic of Germany",
MONTH="February 1-3,",
ORGANIZATION="",
PUBLISHER="",
NOTE="PhD Prize talk")