Inner-Loop-Free ADMM for Cryo-EM (15 Jan 2019)
Fast PET reconstruction: the home stretch (11 Dec 2018)
Self-Supervised Deep Active Accelerated MRI (27 Nov 2018)
Minimum Support Multi-Splines (20 Nov 2018)
Steer&Detect on Images (14 Nov 2017)
Fundamental computational barriers in inverse problems and the mathematics of information (27 Oct 2017)
Two of the most influential recent developments in applied mathematics are neural networks and compressed sensing. Compressed sensing (e.g. via basis pursuit or the lasso) has seen considerable success at solving inverse problems, and neural networks are rapidly becoming commonplace in everyday life, with use cases ranging from self-driving cars to automated music production. The observed success of these approaches would suggest that solving the underlying mathematical model on a computer is both well understood and computationally efficient. We will demonstrate that this is not the case. Instead, we show the following paradox: it is impossible to design algorithms that solve these problems to even one significant figure when given inaccurate input data, even when the inaccuracies can be made arbitrarily small. This occurs even when the input data is in many senses well conditioned, and it shows that every existing algorithm will fail on some simple inputs. Further analysis of the situation for neural networks leads to the following additional ‘paradoxes of deep learning’: (1) one cannot guarantee the existence of algorithms for accurately training a neural network, and (2) one can have a 100% success rate on arbitrarily many test cases, yet uncountably many misclassifications on elements that are arbitrarily close to the training set. Explaining the apparent contradiction between the observed success of compressed sensing, the lasso and neural networks on real-world examples and the aforementioned non-existence result will require the development of new mathematical ideas and tools. We shall explain some of these ideas and give further information on all of the above paradoxes during the talk.
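For context on the kind of problem the abstract refers to, a lasso-based sparse recovery can be sketched in a few lines of NumPy using ISTA (proximal gradient descent), a standard solver for compressed sensing. This is purely an illustrative sketch, not code from the talk; the dimensions, regularization weight, and iteration count below are arbitrary choices.

```python
import numpy as np

# Minimal compressed-sensing sketch: recover a sparse signal x_true from
# underdetermined measurements y = A x_true by solving the lasso
#   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
# with ISTA (iterative soft-thresholding). All parameters are illustrative.
rng = np.random.default_rng(0)
n, m, k = 50, 200, 5                           # measurements, signal length, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)   # random Gaussian sensing matrix
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                 # noiseless measurements

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L with L the Lipschitz constant of the gradient
x = np.zeros(m)
for _ in range(2000):
    z = x - step * (A.T @ (A @ x - y))                         # gradient step on the quadratic
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-threshold (prox of lam*||.||_1)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.3f}")
```

With exact arithmetic and clean measurements this recovers the sparse signal to high accuracy; the talk's point is that such guarantees can break down in surprising ways once the input data is only approximately known.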