(Theoretical Project) Convergence Analysis of Grid-based Algorithms for Sparse Learning
Broadly speaking, machine learning algorithms aim to find an input-output relationship based on a (usually large) set of training examples. This is typically achieved by solving an optimization problem with a regularization term that reduces overfitting. In particular, sparsity-promoting regularizers lead to simpler models and make large training datasets easier to handle. We recently developed a theory for sparse learning that characterizes the form of the solution to this optimization problem in the continuous domain (i.e., the learned output is a parametric function). Based on these theoretical results, a grid-based discretization scheme has been proposed and implemented. The goal of this project is to study the convergence of the discretized problems to the underlying continuous-domain problem in different setups. The ideal outcome would be bounds on the error rate as the grid is refined. The student should have a solid understanding of functional analysis and convex optimization.
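To make the setup concrete, here is a minimal sketch of one possible grid-based discretization. It is an illustration under assumed specifics, not the project's actual scheme: the continuous-domain function is restricted to a uniform grid of N coefficients on [0, 1], the sampling functionals are (hypothetical) Gaussian kernels, and the resulting finite-dimensional problem, min_c ½‖Ac − y‖² + λ‖c‖₁, is solved with plain ISTA. Refining the grid enlarges the discretized problem whose convergence to the continuous one would be studied.

```python
# Illustrative sketch only: the measurement model (Gaussian sampling kernels),
# grid sizes, and lambda are made-up choices, not the project's actual setup.
import math

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1; this is what promotes sparsity."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def ista(A, y, lam, n_iter=2000):
    """Solve min_c 0.5*||A c - y||^2 + lam*||c||_1 by proximal gradient."""
    m, n = len(A), len(A[0])
    # Crude Lipschitz bound: ||A||_F^2 >= ||A||_2^2, so 1/||A||_F^2 is a safe step.
    L = sum(a * a for row in A for a in row)
    step = 1.0 / L
    c = [0.0] * n
    for _ in range(n_iter):
        r = [sum(A[i][j] * c[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        c = [soft_threshold(c[j] - step * g[j], step * lam) for j in range(n)]
    return c

def measurement_matrix(n_grid, sample_points):
    """Hypothetical Gaussian sampling functionals on a uniform grid of [0, 1]."""
    grid = [k / (n_grid - 1) for k in range(n_grid)]
    return [[math.exp(-((t - x) ** 2) / 0.02) for x in grid] for t in sample_points]

samples = [0.1, 0.3, 0.5, 0.7, 0.9]
y = [0.2, 1.0, 0.1, 0.8, 0.05]          # synthetic measurements
for n_grid in (8, 16, 32):               # refine the grid
    A = measurement_matrix(n_grid, samples)
    c = ista(A, y, lam=0.05)
    nnz = sum(1 for cj in c if abs(cj) > 1e-6)
    print(n_grid, nnz)                    # count active grid coefficients
```

The convergence question of the project is, roughly, how the minimizers and optimal values of such discretized problems behave as n_grid grows, relative to the continuous-domain solution.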
© 2019 EPFL • webmaster.big@epfl.ch • 13.02.2019