On Langevin Dynamics in Machine Learning
Michael I. Jordan (UC Berkeley)
Abstract: Langevin diffusions are continuous-time stochastic processes that are based on the gradient of a potential function. As such they have many connections---some known and many still to be explored---to gradient-based machine learning. I'll discuss several recent results in this vein: (1) the use of Langevin-based algorithms in bandit problems; (2) the acceleration of Langevin diffusions; (3) how to use Langevin Monte Carlo without making smoothness assumptions. I'll present these results in the context of a general argument about the virtues of continuous-time perspectives in the analysis of discrete-time optimization and Monte Carlo algorithms.
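Since the abstract describes Langevin diffusions as gradient-driven stochastic processes, a minimal sketch may help fix ideas. Below is the standard unadjusted Langevin algorithm (the Euler-Maruyama discretization of the diffusion), not code from the talk itself; the function names, step size, and Gaussian target are illustrative assumptions.

```python
import numpy as np

def ula_sample(grad_U, x0, step=0.01, n_steps=50_000, rng=None):
    """Unadjusted Langevin algorithm: Euler-Maruyama discretization of
    dX_t = -grad U(X_t) dt + sqrt(2) dB_t, whose stationary distribution
    is proportional to exp(-U). Returns the full trajectory."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_steps,) + x.shape)
    for k in range(n_steps):
        noise = rng.standard_normal(x.shape)
        # Gradient step on the potential plus injected Gaussian noise.
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

# Illustrative target: standard Gaussian, U(x) = x^2 / 2, so grad U(x) = x.
samples = ula_sample(lambda x: x, x0=np.array([3.0]))
```

After discarding a burn-in prefix, the remaining iterates approximate draws from the target; the small discretization bias (which vanishes as the step size shrinks) is one of the issues a continuous-time analysis makes transparent.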
Topics: bioinformatics, game theory, information theory, machine learning, neural and evolutionary computing, classical analysis and ODEs, optimization and control, statistics theory
Audience: researchers in the field
IAS Seminar Series on Theoretical Machine Learning
Series comments: Seminar series focusing on machine learning. Open to all.
Register in advance at forms.gle/KRz8hexzxa5P4USr7 to receive the Zoom link and password. Recordings of past seminars can be found at www.ias.edu/video-tags/seminar-theoretical-machine-learning
Organizers: Ke Li*, Sanjeev Arora
(*contact for this listing)
