Biomedical Imaging Group

Seminars


Variational Framework for Continuous Angular Refinement and Reconstruction in Cryo-EM 14 Aug 2018

Looking beyond Pixels: Theory, Algorithms and Applications of Continuous Sparse Recovery 07 Aug 2018

An L1 representer theorem of vector-valued learning 17 Jul 2018

Computational Super-Sectioning for Single-Slice Structured-Illumination Microscopy 19 Jun 2018

Theoretical and Numerical Analysis of Super-Resolution without Grid 19 Jun 2018

Fast rotational dictionary learning using steerability 08 May 2018

Hybrid spline dictionaries for continuous-domain inverse problems 24 Apr 2018

Fast Multiresolution Reconstruction for Cryo-EM 17 Apr 2018

Direct Reconstruction of Clipped Peaks in Bandlimited OFDM Signals 13 Mar 2018

Sparsity-based techniques for diffraction tomography 27 Feb 2018

Structured Illumination and the Analysis of Single Molecules in Cells 09 Feb 2018

Periodic Splines and Gaussian Processes for the Resolution of Linear Inverse Problems 30 Jan 2018

Fast Piecewise-Affine Motion Estimation Without Segmentation 19 Dec 2017

Continuous Representations in Bioimage Analysis: a Bridge from Pixels to the Real World 12 Dec 2017

Steer&Detect on Images 14 Nov 2017

Fundamental computational barriers in inverse problems and the mathematics of information 27 Oct 2017

Two of the most influential recent developments in applied mathematics are neural networks and compressed sensing. Compressed sensing (e.g. via basis pursuit or the lasso) has seen considerable success at solving inverse problems, and neural networks are rapidly becoming commonplace in everyday life, with use cases ranging from self-driving cars to automated music production. The observed success of these approaches would suggest that solving the underlying mathematical model on a computer is both well understood and computationally efficient. We will demonstrate that this is not the case. Instead, we show the following paradox: it is impossible to design algorithms that solve these problems to even one significant figure when given inaccurate input data, even when the inaccuracies can be made arbitrarily small. This occurs even when the input data are, in many senses, well conditioned, and it shows that every existing algorithm will fail on some simple inputs. Further analysis of the situation for neural networks leads to the following additional ‘paradoxes of deep learning’: (1) one cannot guarantee the existence of algorithms for accurately training the neural network, and (2) one can have a 100% success rate on arbitrarily many test cases, yet uncountably many misclassifications on elements that are arbitrarily close to the training set. Explaining the apparent contradiction between the observed success of compressed sensing, the lasso, and neural networks on real-world examples and the aforementioned non-existence results will require the development of new mathematical ideas and tools. We shall explain some of these ideas and give further information on all of the above paradoxes during the talk.
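As a concrete illustration of the compressed-sensing setup the abstract refers to (not taken from the talk itself), the following minimal sketch recovers a sparse signal from underdetermined Gaussian measurements by solving the lasso with iterative soft-thresholding (ISTA); all dimensions and parameters here are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined system: m measurements of an n-dimensional signal (m < n).
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Ground-truth signal with only k nonzero entries.
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.standard_normal(k)

y = A @ x_true  # noiseless measurements

def ista(A, y, lam=1e-3, n_iter=5000):
    """Solve the lasso  min_x 0.5*||Ax - y||^2 + lam*||x||_1  by ISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the smooth term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true))      # small: the sparse signal is recovered
```

With well-conditioned random measurements the sparse signal is recovered to high accuracy; the talk's point is that such success cannot be guaranteed by any algorithm once arbitrarily small input inaccuracies are allowed.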

Variational use of B-splines and Kernel Based Functions 27 Oct 2017

Deep learning based data manifold projection - a new regularization for inverse problems 17 Oct 2017

GlobalBioIm Lib - v2: new tools, more flexibility, and improved composition rules 03 Oct 2017

Fractional Integral transforms and Time-Frequency Representations 02 Jun 2017

First steps toward fast PET reconstruction 30 May 2017

Lipid membranes and surface reconstruction - a biologically inspired method for 3D segmentation 16 May 2017

Optical Diffraction Tomography: Principles and Algorithms 09 May 2017

Compressed Sensing for Dose Reduction in STEM Tomography 11 Apr 2017

Chasing Mycobacteria 10 Apr 2017

Multifractal analysis for signal and image classification 23 Mar 2017

Inverse problems and multimodality for biological imaging 28 Feb 2017

© 2010 EPFL • webmaster.big@epfl.ch • 26.01.2010