BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Yury Polyanskiy (MIT)
DTSTART:20210226T160000Z
DTEND:20210226T171200Z
DTSTAMP:20260423T022712Z
UID:sss/17
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/sss/17/">Sel
 f-regularizing Property of Nonparametric Maximum Likelihood Estimator in M
 ixture Models</a>\nby Yury Polyanskiy (MIT) as part of Stochastics and Sta
 tistics Seminar Series\n\n\nAbstract\nIntroduced by Kiefer and Wolfowitz (1
 956)\, the nonparametric maximum likelihood estimator (NPMLE) is a widely u
 sed methodology for learning mixture models and empirical Bayes estimation
 . Sidestepping the non-convexity in mixture likelihood\, the NPMLE estimat
 es the mixing distribution by maximizing the total likelihood over the spa
 ce of probability measures\, which can be viewed as an extreme form of ove
 rparameterization.\n\nIn this work we discover a surprising property of t
 he NPMLE solution. Consider\, for example\, a Gaussian mixture model on th
 e real line with a subgaussian mixing distribution. Leveraging complex-ana
 lytic techniques\, we show that with high probability the NPMLE based on a
  sample of size n has O(\\log n) atoms (mass points)\, significantly impro
 ving the deterministic upper bound of n due to Lindsay (1983). Notably\, a
 ny such Gaussian mixture is statistically indistinguishable from a finite 
 one with O(\\log n) components (and this is tight for certain mixtures). T
 hus\, absent any explicit form of model selection\, NPMLE automatically ch
 ooses the right model complexity\, a property we term self-regularization.
  Extensions to other exponential families are given. As a statistical appl
 ication\, we show that this structural property can be harnessed to bootst
 rap existing Hellinger risk bounds of the (parametric) MLE for finite Gaus
 sian mixtures to the NPMLE for general Gaussian mixtures\, recovering a res
 ult of Zhang (2009). Time permitting\, we will discuss connections to appr
 oaching the optimal regret in empirical Bayes. This is based on joint work
  with Yihong Wu (Yale).\n
LOCATION:https://researchseminars.org/talk/sss/17/
END:VEVENT
END:VCALENDAR
