Deep Learning Meets Sparse Regularization—A Signal Processing Perspective
R. Parhi, R.D. Nowak
IEEE Signal Processing Magazine, vol. 40, no. 6, pp. 63–74, September 2023.
Deep learning (DL) has been wildly successful in practice, and most state-of-the-art machine-learning methods are based on neural networks (NNs). Lacking, however, is a rigorous mathematical theory that adequately explains the amazing performance of deep NNs (DNNs). In this article, we present a relatively new mathematical framework that provides the beginning of a deeper understanding of DL. This framework precisely characterizes the functional properties of NNs that are trained to fit data. The key mathematical tools that support this framework include transform-domain sparse regularization, the Radon transform of computed tomography, and approximation theory, which are all techniques deeply rooted in signal processing. This framework explains the effect of weight decay regularization in NN training, the use of skip connections and low-rank weight matrices in network architectures, the role of sparsity in NNs, and why NNs can perform well in high-dimensional problems.
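As a rough illustration of the weight decay connection (a sketch in notation not taken from the article itself, using the well-known single-hidden-layer ReLU case): for a network $f(x) = \sum_{k=1}^{K} v_k \,(w_k^\top x - b_k)_+$, training with weight decay,
\[
\min_{\{v_k, w_k, b_k\}} \; \sum_{i} \ell\big(y_i, f(x_i)\big) + \frac{\lambda}{2} \sum_{k=1}^{K} \big( |v_k|^2 + \|w_k\|_2^2 \big),
\]
is equivalent, by rescaling $(v_k, w_k) \mapsto (v_k/\alpha_k, \alpha_k w_k)$ and using the homogeneity of the ReLU, to the sparsity-promoting problem
\[
\min_{\{v_k, w_k, b_k\}} \; \sum_{i} \ell\big(y_i, f(x_i)\big) + \lambda \sum_{k=1}^{K} |v_k| \,\|w_k\|_2 .
\]
The framework surveyed in the article connects this $\ell^1$-type path-norm penalty to a sparsity-promoting seminorm in the Radon transform domain.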
@ARTICLE(http://bigwww.epfl.ch/publications/parhi2304.html, AUTHOR="Parhi, R. and Nowak, R.D.", TITLE="Deep Learning Meets Sparse Regularization---{A} Signal Processing Perspective", JOURNAL="{IEEE} Signal Processing Magazine", YEAR="2023", VOLUME="40", NUMBER="6", PAGES="63--74", MONTH="September")