Learning Sparsifying Regularisers
S. Neumayer
Proceedings of the Eleventh Conference on Applied Inverse Problems (AIP'23), Göttingen, Federal Republic of Germany, September 4-8, 2023, pp. 175.
Inverse problems can be solved, for example, with variational models. First, we discuss a convex regularizer based on a one-hidden-layer neural network with (almost) free-form activation functions. Our numerical experiments show that this simple architecture already achieves state-of-the-art performance in the convex regime. This is very different from the non-convex case, where more complex models usually result in better performance. Inspired by this observation, we discuss an extension of our approach within the convex non-convex framework. Here, the regularizer can be non-convex, but the overall objective has to remain convex. This maintains the favorable optimization properties while allowing a significant performance boost. Our numerical results show that this convex-energy-based approach is indeed able to outperform the popular BM3D denoiser on the BSD68 test set for various noise levels.
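To make the variational setup concrete, the following is a minimal sketch, not the authors' actual model: a regularizer of the form R(x) = Σᵢ ψᵢ((Wx)ᵢ) with convex profiles ψᵢ, plugged into a denoising objective ½‖x − y‖² + λR(x). The learned free-form activations of the paper are replaced here by a fixed Huber profile, and W is a simple finite-difference matrix; both are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: variational denoising with a one-hidden-layer
# regularizer R(x) = sum_i psi((W x)_i). The learned profiles psi_i of
# the paper are replaced by a fixed convex Huber function; W is a
# finite-difference matrix instead of learned filters.

DELTA = 0.1  # smoothing parameter of the Huber profile

def huber_grad(t, delta=DELTA):
    """Derivative of the Huber profile: the clipped identity."""
    return np.clip(t / delta, -1.0, 1.0)

def denoise(y, W, lam=0.5, n_iter=200):
    """Minimize 0.5*||x - y||^2 + lam * sum_i psi((W x)_i) by gradient descent.

    The objective is convex, so plain gradient descent with a step below
    1 / L (L = 1 + lam * ||W||^2 / delta, the gradient's Lipschitz
    constant) converges to the global minimizer.
    """
    step = 1.0 / (1.0 + lam * np.linalg.norm(W, 2) ** 2 / DELTA)
    x = y.copy()
    for _ in range(n_iter):
        grad = (x - y) + lam * W.T @ huber_grad(W @ x)
        x -= step * grad
    return x

# Usage: denoise a piecewise-constant signal; W computes first differences.
rng = np.random.default_rng(0)
n = 64
clean = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])
noisy = clean + 0.1 * rng.standard_normal(n)
W = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]
x_hat = denoise(noisy, W)
```

The sparsity-promoting profile penalizes the filter responses Wx, so flat regions are smoothed while the jump is largely preserved; the convexity of the overall objective guarantees that the computed minimizer is global.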
@INPROCEEDINGS{http://bigwww.epfl.ch/publications/neumayer2303.html,
AUTHOR="Neumayer, S.",
TITLE="Learning Sparsifying Regularisers",
BOOKTITLE="Proceedings of the Eleventh Conference on Applied Inverse Problems ({AIP'23})",
YEAR="2023",
pages="175",
address="G{\"{o}}ttingen, Federal Republic of Germany",
month="September 4-8,"}