Scaling Optimal Transport for High-Dimensional Learning
Gabriel Peyré (CNRS, Ecole Normale Supérieure)
Abstract: Optimal transport (OT) has recently gained a lot of interest in machine learning. It is a natural tool to compare probability distributions in a geometrically faithful way. It finds applications in both supervised learning (using geometric loss functions) and unsupervised learning (to perform generative model fitting). OT is, however, plagued by the curse of dimensionality, since it might require a number of samples that grows exponentially with the dimension. In this talk, I will review entropic regularization methods which define geometric loss functions approximating OT with a better sample complexity. More information and references can be found on the website of our book Computational Optimal Transport.
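To make the entropic-regularization idea in the abstract concrete, here is a minimal illustrative sketch (not taken from the talk) of Sinkhorn's algorithm, the standard iterative scheme for computing an entropy-regularized OT cost between two empirical point clouds; the function name and the parameters eps and n_iters are illustrative assumptions, not the speaker's implementation.

```python
# Minimal sketch (assumed, not from the talk): Sinkhorn iterations for
# entropy-regularized optimal transport between two empirical point clouds.
import numpy as np

def entropic_ot_cost(x, y, eps=0.1, n_iters=200):
    """Approximate OT cost between uniform measures on samples x and y
    using entropic regularization (Sinkhorn's algorithm)."""
    n, m = x.shape[0], y.shape[0]
    a = np.full(n, 1.0 / n)            # uniform weights on x samples
    b = np.full(m, 1.0 / m)            # uniform weights on y samples
    # squared Euclidean ground cost between all pairs of samples
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    K = np.exp(-C / eps)               # Gibbs kernel of the regularized problem
    u = np.ones(n)
    v = np.ones(m)
    for _ in range(n_iters):           # alternating Sinkhorn scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]    # approximate optimal coupling
    return np.sum(P * C)               # regularized transport cost

# Example: compare two Gaussian point clouds in dimension 5
rng = np.random.default_rng(0)
x = rng.normal(size=(300, 5))
y = rng.normal(loc=1.0, size=(300, 5))
print(entropic_ot_cost(x, y))
```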
Topics: analysis of PDEs, functional analysis, general mathematics, numerical analysis, optimization and control, probability, statistics theory
Audience: researchers in the topic
One World seminar: Mathematical Methods for Arbitrary Data Sources (MADS)
Series comments: Research seminar on mathematics for data
The lecture series will collect talks on mathematical disciplines related to all kinds of data, ranging from statistics and machine learning to model-based approaches and inverse problems. Each pair of talks will address a specific direction, e.g., a NoMADS session related to nonlocal approaches or a DeepMADS session related to deep learning.
Approximately 15 minutes prior to the beginning of the lecture, a Zoom link will be provided on the official website and via the mailing list. For further details please visit our webpage.
Organizers: Leon Bungert*, Martin Burger, Antonio Esposito*, Janic Föcke, Daniel Tenbrinck, Philipp Wacker
(*contact for this listing)
