Biomedical Imaging Group

Seminars

Adaptive regularization for three-dimensional optical diffraction tomography (17 Dec 2019)

About the use of non-imaging data to improve domain adaptation for spinal cord segmentation on MRI (26 Nov 2019)

Lagrangian Tracking of Bubbles Entrained by a Plunging Jet (19 Nov 2019)

Multigrid Methods for the Helmholtz Equation and its Application in Optical Diffraction Tomography (05 Nov 2019)

Efficient methods for solving large-scale inverse problems (17 Oct 2019)

Generating Sparse Stochastic Processes (24 Sep 2019)

Sparse signal reconstruction using variational methods with fractional derivatives (10 Sep 2019)

Multivariate Haar wavelets and B-splines (13 Aug 2019)

Deep Learning for Magnetic Resonance Image Reconstruction and Analysis (06 Aug 2019)

The Interpolation Problem with TV(2) Regularization (30 Jul 2019)

Duality and Uniqueness for the gTV Problem (23 Jul 2019)

An Introduction to Convolutional Neural Networks for Inverse Problems in Imaging (09 Jul 2019)

Multiple Kernel Regression with Sparsity Constraints (18 Jun 2019)

Optimal Spline Generators for Derivative Sampling (18 Jun 2019)

Total variation minimization through Domain Decomposition (28 May 2019)

Cell detection by functional inverse diffusion and non-negative group sparsity (07 May 2019)

Can neural networks always be trained? On the boundaries of deep learning (06 May 2019)

Deep learning has emerged as a competitive new tool in image reconstruction. However, recent results demonstrate that such methods are typically highly unstable: tiny, almost undetectable perturbations cause severe artefacts in the reconstruction, which is a major concern in practice. This is paradoxical, since stable state-of-the-art methods already exist for these problems, and approximation-theoretic results therefore imply, non-constructively, the existence of stable and accurate neural networks. Hence the fundamental question: can we explicitly construct or train stable and accurate neural networks for image reconstruction? I will discuss two results in this direction. The first is a negative result, saying that such constructions are in general impossible, even given access to the solutions of common optimisation algorithms such as basis pursuit. The second is a positive result, saying that under sparsity assumptions such neural networks can be constructed. These neural networks are stable and theoretically competitive with state-of-the-art results from other methods. Numerical examples of competitive performance are also provided.
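For reference, the basis pursuit problem mentioned in the abstract is the convex program min ||x||_1 subject to Ax = b. A minimal sketch of how it can be solved (not the speaker's method; the split x = u - v and the use of SciPy's LP solver are illustrative choices):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to Ax = b via a linear program.

    Standard reformulation: write x = u - v with u, v >= 0, so that
    ||x||_1 = sum(u + v) and the constraint becomes A(u - v) = b.
    """
    m, n = A.shape
    c = np.ones(2 * n)                  # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])           # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v

# Toy example: recover a 2-sparse signal from 8 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
x_hat = basis_pursuit(A, b)
print("l1 norm of solution:", np.abs(x_hat).sum())
```

Since x_true is itself feasible, the minimizer's l1 norm can never exceed that of x_true; under suitable conditions on A, basis pursuit recovers the sparse signal exactly.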

Measure Digital, Reconstruct Analog (16 Apr 2019)

Deep Learning for Non-Linear Inverse Problems (02 Apr 2019)

Numerical Investigation of Continuous-Domain Lp-norm Regularization in Generalized Interpolation (19 Feb 2019)

Inner-Loop-Free ADMM for Cryo-EM (15 Jan 2019)

Fast PET reconstruction: the home stretch (11 Dec 2018)

Self-Supervised Deep Active Accelerated MRI (27 Nov 2018)

Minimum Support Multi-Splines (20 Nov 2018)

Sparse Coding with Projected Gradient Descent for Inverse Problems (23 Oct 2018)

Adversarially-Sandwiched VAEs for Inverse Problems (02 Oct 2018)

PSF-Extractor: from fluorescent bead measurements to a continuous PSF model (11 Sep 2018)

Analysis of Planar Shapes through Shape Dictionary Learning with an Extension to Splines (28 Aug 2018)

Complex-order scale-invariant operators and self-similar processes (21 Aug 2018)

Variational Framework for Continuous Angular Refinement and Reconstruction in Cryo-EM (14 Aug 2018)

Looking beyond Pixels: Theory, Algorithms and Applications of Continuous Sparse Recovery (07 Aug 2018)

© 2010 EPFL • webmaster.big@epfl.ch • 26.01.2010