The troublesome kernel - on AI-generated hallucinations in deep learning for inverse problems
Nina Gottschling
2021-10-29
Abstract

There is overwhelming empirical evidence that Deep Learning (DL) leads to unstable methods in applications ranging from image classification and computer vision to voice recognition and automated diagnosis in medicine. Recently, a similar instability phenomenon has been discovered when DL is used to solve certain problems in computational science, namely inverse problems in imaging. In this paper we present a comprehensive mathematical analysis explaining the many facets of the instability phenomenon in DL for inverse problems. These instabilities include, in particular, false positives and false negatives as well as AI hallucinations. Our main results not only explain why this phenomenon occurs, they also shed light on why finding a cure for instabilities is so difficult in practice. Additionally, these theorems show that instabilities are typically not rare events: they can occur even when the measurements are subject to completely random noise, and consequently it can be easy to destabilize certain trained neural networks. Furthermore, we show how training typically encourages AI hallucinations and instabilities. We also examine the delicate balance between reconstruction performance and stability, and in particular how DL methods may outperform state-of-the-art sparse regularization methods, but at the cost of instability. Finally, we demonstrate a counterintuitive phenomenon: training a neural network may generically not yield an optimal reconstruction method for an inverse problem.
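As a hedged illustration of the setting the abstract refers to, the following is a minimal sketch of a linear inverse problem and of a local worst-case measure of instability; the notation (A, \Psi, \epsilon) is chosen here for exposition and is an assumption, not quoted from the paper.

    y = A x + e, \qquad A \in \mathbb{C}^{m \times N}, \quad m < N,

    \text{instability of a reconstruction map } \Psi : \mathbb{C}^{m} \to \mathbb{C}^{N} \text{ at } y:
    \qquad \sup_{\|z\|_{2} \leq \epsilon} \, \big\| \Psi(y + z) - \Psi(y) \big\|_{2} \ \text{ is large even for small } \epsilon > 0.

In this reading, a hallucination or a false positive/negative corresponds to a small measurement perturbation z for which \Psi(y + z) contains plausible-looking but incorrect features.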