Biomedical Imaging Group
Seminars



On Radon-Domain BV Spaces: The Native Spaces for Shallow Neural Networks
Rahul Parhi

Meeting • 2022-09-27

Abstract
Neural networks are not well understood mathematically, and their success in many science and engineering applications is usually backed only by empirical evidence. In this talk, we will discuss the study of neural networks from first principles. We use tools from variational spline theory to mathematically understand neural networks; in particular, we view neural networks as a type of spline. We propose and study a new family of Banach spaces: bounded variation (BV) spaces defined via the Radon transform. These are the “native spaces” for neural networks. We show that finite-width neural networks are solutions to data-fitting variational problems over these spaces. Moreover, these variational problems can be recast as finite-dimensional neural-network training problems with regularization schemes related to weight decay and path-norm regularization, giving theoretical insight into these common regularizers. The Radon-domain BV spaces are also interesting from the perspectives of functional analysis and statistical estimation. Their best approximation and estimation error rates are (essentially) independent of the input dimension, whereas the best linear approximation and estimation error rates suffer the curse of dimensionality. The Radon-domain BV spaces contain functions that are very smooth in all directions except (perhaps) a few; this anisotropy distinguishes them from the classical function spaces studied in analysis.
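The connection between weight decay and path-norm regularization mentioned in the abstract follows from the positive homogeneity of the ReLU: rescaling a neuron's input and output weights leaves the network function unchanged, and minimizing the weight-decay penalty over such rescalings yields the path-norm. A minimal numerical sketch of this equivalence, assuming a bias-free shallow ReLU network (all names here are illustrative, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shallow ReLU network: f(x) = sum_k v_k * relu(w_k . x)
d, K = 3, 5
W = rng.normal(size=(K, d))   # input weights w_k (rows)
v = rng.normal(size=K)        # output weights v_k

def weight_decay(W, v):
    # Standard weight-decay penalty: (||W||_F^2 + ||v||_2^2) / 2.
    return 0.5 * (np.sum(W**2) + np.sum(v**2))

def path_norm(W, v):
    # Path-norm regularizer: sum_k |v_k| * ||w_k||_2.
    return np.sum(np.abs(v) * np.linalg.norm(W, axis=1))

def balance(W, v):
    # relu(a * w.x) = a * relu(w.x) for a > 0, so scaling w_k by a_k
    # and v_k by 1/a_k leaves f unchanged. Choosing
    # a_k^2 = |v_k| / ||w_k|| minimizes the per-neuron weight decay
    # (AM-GM equality case).
    a = np.sqrt(np.abs(v) / np.linalg.norm(W, axis=1))
    return W * a[:, None], v / a

Wb, vb = balance(W, v)

# The path-norm is invariant under rescaling, and at the balanced
# point weight decay attains it exactly.
assert np.allclose(path_norm(Wb, vb), path_norm(W, v))
assert np.allclose(weight_decay(Wb, vb), path_norm(W, v))
# For any weights, AM-GM gives weight_decay >= path_norm.
assert weight_decay(W, v) >= path_norm(W, v) - 1e-12
```

The last assertion reflects the per-neuron inequality (||w_k||² + v_k²)/2 ≥ |v_k|·||w_k||, which is why training with weight decay implicitly controls the path-norm.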