BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Lars Ruthotto (Emory University\, US)
DTSTART:20200518T120000Z
DTEND:20200518T124500Z
DTSTAMP:20260423T035735Z
UID:OWMADS/4
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OWMADS/4/">M
 achine learning meets optimal transport: old solutions for new problems an
 d vice versa</a>\nby Lars Ruthotto (Emory University\, US) as part of One 
 World seminar: Mathematical Methods for Arbitrary Data Sources (MADS)\n\n\
 nAbstract\nThis talk presents new connections between optimal transport (O
 T)\, which has been a critical problem in applied mathematics for centurie
 s\, and machine learning (ML)\, which has received enormous attention in t
 he past few decades. In recent years\, OT and ML have become increasingl
 y intertwined. This talk contributes to this booming intersection by provi
 ding efficient and scalable computational methods for OT and ML.\nThe firs
 t part of the talk shows how neural networks can be used to efficiently ap
 proximate the optimal transport map between two densities in high dimensio
 ns. To avoid the curse of dimensionality\, we combine Lagrangian and Euler
 ian viewpoints and employ neural networks to solve the underlying Hamilton
 -Jacobi-Bellman equation. Our approach avoids any space discretization and
  can be implemented in existing machine learning frameworks. We present nu
 merical results for OT in up to 100 dimensions and validate our solver in 
 a two-dimensional setting.\nThe second part of the talk shows how optimal
  transport theory can improve the efficiency of training generative models
  and density estimators\, which are critical in machine learning. We consi
 der continuous normalizing flows (CNFs)\, which have emerged as one of th
 e most promising approaches for variational inference in the ML communit
 y. Our numerical implementation is a discretize-optimize method whose forw
 ard problem relies on manually derived gradients and the Laplacian of th
 e neural network and uses automatic differentiation for the optimizatio
 n. In common benchmark challenges\, our method outperforms state-of-the-ar
 t CNF approaches by reducing the network size by 8x\, accelerating traini
 ng by 10x-40x\, and allowing 30x-50x faster inference.\n
LOCATION:https://researchseminars.org/talk/OWMADS/4/
END:VEVENT
END:VCALENDAR
