BIOMEDICAL IMAGING GROUP (BIG)
Biomedical Imaging Laboratory (LIB), EPFL

Towards Trustworthy Deep Learning for Image Reconstruction

A. Goujon

École polytechnique fédérale de Lausanne, EPFL Thesis no. 10667 (2024), 370 p., March 8, 2024.


The remarkable ability of deep learning (DL) models to approximate high-dimensional functions from samples has sparked a revolution whose impact across numerous scientific and industrial domains cannot be overemphasized. In sensitive applications, the good performance of DL is unfortunately sometimes overshadowed by unexpected behaviors, including hallucinations in medical image reconstruction. Serious concerns have thus been raised regarding the extent to which one can trust the output of DL models. Restoring trust is challenging because the same depth that fuels the performance also turns DL models into black boxes: the parameters of the model are only remotely connected to the function they parameterize, and enforcing constraints on the model to obtain guarantees on its output usually wipes out the performance boost of DL. In this thesis, we pursue the goal of improving the trustworthiness of several DL methods while maintaining performance. Our approach tackles the problem via the design of expressive, stable, and interpretable spline-based parameterizations across various contexts.

The contributions of this thesis are divided into three parts. In the first part, we concentrate on parameterizations for low-dimensional regression tasks. There, depth is not necessarily beneficial, and one can have it all (stability, expressivity, and interpretability) with linear combinations of well-chosen atoms. This is first shown through the design of shortest-support multi-spline bases, and then through a study of the stability of a local parameterization of continuous and piecewise-linear (CPWL) functions.
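To make the idea of a stable, local, and interpretable parameterization concrete, here is a minimal Python sketch of a one-dimensional CPWL function written as a linear combination of shifted hat atoms (linear B-splines) on a uniform grid. The names `hat` and `cpwl` and the toy setup are illustrative assumptions only, not the shortest-support multi-spline construction of the thesis; the point is that each coefficient equals the value of the function at the corresponding knot, which is what makes such a parameterization interpretable.

```python
import numpy as np

def hat(t):
    """Linear B-spline (triangle) atom supported on [-1, 1]."""
    return np.maximum(1.0 - np.abs(t), 0.0)

def cpwl(x, coeffs, grid):
    """Continuous piecewise-linear function as a linear combination of shifted
    hat atoms on a uniform grid; coeffs[k] is also the value of the function
    at grid[k], which makes the parameterization directly interpretable."""
    h = grid[1] - grid[0]                          # uniform knot spacing
    atoms = hat((x[:, None] - grid[None, :]) / h)  # one column per atom
    return atoms @ coeffs

# Toy usage: the CPWL model interpolates sin(x) exactly at the knots.
grid = np.linspace(-2.0, 2.0, 9)
coeffs = np.sin(grid)                              # knot values = coefficients
x = np.linspace(-2.0, 2.0, 101)
y = cpwl(x, coeffs, grid)
```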

In the second part, we focus on deep parameterizations that can cope with higher-dimensional problems. We first study the composition operation within CPWL neural networks (NNs) and give new insights into the role of the activation function in the expressivity of the NN. We then propose to use Lipschitz-constrained learnable linear-spline activations to build expressive and provably stable deep NNs. We characterize some universal properties of our framework, develop an efficient procedure to train the activations under the constraint, and, lastly, show experimental improvements over competing frameworks with similar constraints on various tasks, including plug-and-play image reconstruction with provably nonexpansive denoisers.
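The sketch below illustrates, under simplifying assumptions, how a learnable linear-spline activation can be kept provably Lipschitz by projecting its slopes after each optimizer step. The class `LipschitzLinearSpline`, its knot-value parameterization, and the projection routine are hypothetical stand-ins and not the exact scheme or training procedure developed in the thesis.

```python
import torch
import torch.nn as nn

class LipschitzLinearSpline(nn.Module):
    """Learnable linear-spline activation on a uniform knot grid.

    The spline is stored through its values at the knots; between knots it is
    linearly interpolated, and inputs outside the grid reuse the boundary knot
    values (constant extrapolation). Clipping the finite differences of the
    knot values keeps every slope in [-lmax, lmax], so the activation is
    provably lmax-Lipschitz.
    """

    def __init__(self, num_knots=21, x_range=3.0, lmax=1.0):
        super().__init__()
        grid = torch.linspace(-x_range, x_range, num_knots)
        self.register_buffer("grid", grid)
        self.h = float(grid[1] - grid[0])
        self.lmax = lmax
        # Initialize as the identity on the grid (already 1-Lipschitz).
        self.values = nn.Parameter(grid.clone())

    @torch.no_grad()
    def project_lipschitz_(self):
        """Clamp the slopes of the linear pieces; call after each optimizer step."""
        slopes = torch.diff(self.values) / self.h
        slopes.clamp_(-self.lmax, self.lmax)
        rebuilt = torch.cat(
            [self.values[:1], self.values[0] + torch.cumsum(slopes * self.h, dim=0)]
        )
        self.values.copy_(rebuilt)

    def forward(self, x):
        # Fractional knot index, clamped so that out-of-range inputs fall back
        # on the boundary knots (constant extrapolation keeps the Lipschitz bound).
        t = ((x - self.grid[0]) / self.h).clamp(0.0, self.grid.numel() - 1 - 1e-6)
        idx = t.long()
        frac = t - idx
        return (1.0 - frac) * self.values[idx] + frac * self.values[idx + 1]

# Usage sketch: act = LipschitzLinearSpline(); y = act(torch.randn(8, 16));
# during training, call act.project_lipschitz_() after each optimizer.step().
```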

In the third and final part, we refine the parameterization by focusing on image reconstruction tasks. We propose a framework to learn convex regularizers that relies on our learnable Lipschitz-constrained spline activations. The parameterization yields lightweight models that are transparent rather than black boxes and that come with theoretical guarantees on the reconstruction. Our method achieves state-of-the-art performance for CT and MRI reconstruction among convex regularization methods. Lastly, we extend the framework to learn weakly convex regularizers, which boosts performance while maintaining most of the guarantees.
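As a rough illustration of reconstruction with a convex ridge-type regularizer R(x) = sum_i psi((W x)_i), the sketch below runs plain gradient descent on 0.5 * ||H x - y||^2 + lam * R(x), with a fixed Huber profile standing in for the learnable convex spline psi. All names (`huber`, `reconstruct`, the toy denoising setup) are illustrative assumptions, not the method of the thesis; the point is only that convexity of the profiles makes the objective well posed, with convergence guarantees for a small enough step size.

```python
import numpy as np

def huber(t, delta=0.1):
    """Convex profile standing in for the learnable convex spline psi."""
    return np.where(np.abs(t) <= delta, 0.5 * t**2, delta * (np.abs(t) - 0.5 * delta))

def huber_grad(t, delta=0.1):
    """Derivative of the Huber profile (monotone and 1-Lipschitz)."""
    return np.clip(t, -delta, delta)

def reconstruct(y, H, W, lam=0.05, step=0.1, n_iter=200):
    """Gradient descent on the convex objective
        f(x) = 0.5 * ||H x - y||^2 + lam * sum_i psi((W x)_i),
    which converges to a global minimizer for a small enough step size."""
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y) + lam * (W.T @ huber_grad(W @ x))
        x = x - step * grad
    return x

# Toy example: denoising (H = identity) with first-order finite differences as W.
rng = np.random.default_rng(0)
n = 64
clean = np.sign(np.sin(np.linspace(0.0, 4.0 * np.pi, n)))
y = clean + 0.3 * rng.standard_normal(n)
H = np.eye(n)
W = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]       # (n-1) x n finite-difference matrix
x_hat = reconstruct(y, H, W)
```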

@PHDTHESIS(http://bigwww.epfl.ch/publications/goujon2402.html,
AUTHOR="Goujon, A.",
TITLE="Towards Trustworthy Deep Learning for Image Reconstruction",
SCHOOL="{\'{E}}cole polytechnique f{\'{e}}d{\'{e}}rale de {L}ausanne
	({EPFL})",
YEAR="2024",
type="{EPFL} Thesis no.\ 10667 (2024), 370 p.",
address="",
month="March 8,",
note="")
© 2024 Goujon. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from Goujon. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.