Representer theorems for machine learning and inverse problems
Michael Unser (École polytechnique fédérale de Lausanne, CH)
Abstract: Regularization addresses the ill-posedness of the training problem in machine learning or of the reconstruction of a signal from a limited number of measurements. The standard strategy consists of augmenting the original cost functional by an energy that penalizes solutions with undesirable behaviour. In this talk, I will present a general representer theorem that characterizes the solutions of a remarkably broad class of optimization problems in Banach spaces and helps us understand the effect of regularization. I will then use the theorem to retrieve some classical characterizations, such as the celebrated representer theorem of machine learning for RKHS, Tikhonov regularization, and representer theorems for sparsity-promoting functionals, as well as a few new ones, including a result for deep neural networks.
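For readers unfamiliar with the result, the celebrated RKHS representer theorem mentioned in the abstract can be stated as follows (a standard textbook formulation, with notation chosen here rather than taken from the talk): for a reproducing kernel Hilbert space $\mathcal{H}$ with kernel $k$, training data $(x_i, y_i)$, $i = 1, \dots, N$, an arbitrary loss $L$, and $\lambda > 0$, every minimizer of
$$\min_{f \in \mathcal{H}} \; \sum_{i=1}^{N} L\bigl(y_i, f(x_i)\bigr) + \lambda \|f\|_{\mathcal{H}}^{2}$$
admits the finite-dimensional expansion
$$f^{\star}(x) = \sum_{i=1}^{N} \alpha_i \, k(x, x_i), \qquad \alpha_i \in \mathbb{R},$$
i.e., the solution is a linear combination of kernel evaluations at the training points, which is the type of characterization the general Banach-space theorem extends.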
Topics: analysis of PDEs, functional analysis, general mathematics, numerical analysis, optimization and control, probability, statistics theory
Audience: researchers in the topic
One World seminar: Mathematical Methods for Arbitrary Data Sources (MADS)
Series comments: Research seminar on mathematics for data
The lecture series will collect talks on mathematical disciplines related to all kinds of data, ranging from statistics and machine learning to model-based approaches and inverse problems. Each pair of talks will address a specific direction, e.g., a NoMADS session related to nonlocal approaches or a DeepMADS session related to deep learning.
Approximately 15 minutes prior to the beginning of the lecture, a Zoom link will be provided on the official website and via the mailing list. For further details, please visit our webpage.
| Organizers: | Leon Bungert*, Martin Burger, Antonio Esposito*, Janic Föcke, Daniel Tenbrinck, Philipp Wacker |
| *contact for this listing |
