BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Pierre Nyquist (Chalmers University of Technology & University of 
 Gothenburg)
DTSTART:20240221T121500Z
DTEND:20240221T130000Z
DTSTAMP:20260422T155052Z
UID:gbgstats/42
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/gbgstats/42/
 ">Large deviations for Markov chain Monte Carlo methods: the surprisingly 
 curious case of Metropolis-Hastings.</a>\nby Pierre Nyquist (Chalmers Univ
 ersity of Technology & University of Gothenburg) as part of Gothenburg sta
 tistics seminar\n\nLecture held in MVL14.\n\nAbstract\nMarkov chain Monte 
 Carlo (MCMC) methods have become the workhorse for numerical computations 
 in a range of scientific disciplines\, e.g.\, computational chemistry and 
 physics\, statistics\, and machine learning. The performance of MCMC metho
 ds has therefore become an important topic at the intersection of probabil
 ity theory and (computational) statistics: e.g.\, when the underlying dist
 ribution one is trying to sample from becomes sufficiently complex\, conve
 rgence speed and/or the cost per iteration becomes an issue for most MCMC 
 methods. \n\nThe analysis\, and subsequently design\, of MCMC methods has 
 to a large degree relied on classical tools used to determine the speed of
  convergence of Markov chains\, e.g.\, mixing times\, spectral gap and fun
 ctional inequalities (Poincaré\, log-Sobolev). An alternative avenue is t
 o use the theory of large deviations for empirical measures. In this talk 
 I will first give a general outline of this approach to analysing MCMC met
 hods\, along with some recent examples. I will then consider the specific 
 case of the Metropolis-Hastings algorithm\, the most classical amongst all
  MCMC methods and a foundational building block for many more advanced met
 hods. Despite the simplicity of this method\, it turns out that its theore
 tical analysis is still a rich area\, and from the large deviation perspec
 tive it is surprisingly difficult to treat. As a first step we show 
 a large deviation principle for the underlying Markov chain\, extending th
 e celebrated Donsker-Varadhan theory. Time permitting I will also discuss o
 ngoing and future work on using this result for better understanding of bo
 th the Metropolis-Hastings method and more advanced methods\, such as appr
 oximate Bayesian computation (ABC-MCMC) and the Metropolis-adjusted Langev
 in algorithm (MALA).\n\nThe talk will be self-contained and no prior knowl
 edge of either MCMC methods or large deviations is required.\n\nThe talk i
 s primarily based on joint work with Federica Milinanni (KTH).\n
LOCATION:https://researchseminars.org/talk/gbgstats/42/
END:VEVENT
END:VCALENDAR
