BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Jasper Lee (Brown University)
DTSTART:20210331T170000Z
DTEND:20210331T180000Z
DTSTAMP:20260423T021010Z
UID:TCSPlus/23
DESCRIPTION:Title: Optimal Sub-Gaussian Mean Estimation in $\\mathbb{R}$
  (https://researchseminars.org/talk/TCSPlus/23/)\nby Jasper Lee (Brown
  University) as part of TCS+\n\nAbstract\nWe revisit and settle a
  fundamental problem in statistics: given access to independent samples
  from a 1D random variable (with finite but unknown mean and variance)\,
  what is the best way to estimate the mean in the high-probability
  regime\, in terms of error convergence with respect to sample size? The
  conventional wisdom is to use the empirical mean as our estimate.
  However\, it is known that the empirical mean can in fact have
  exponentially sub-optimal convergence for certain heavy-tailed
  distributions. On the other hand\, the median-of-means estimator
  (invented and reinvented across various literatures) does have
  sub-Gaussian convergence for all finite-variance distributions\, albeit
  only in the big-O sense with a sub-optimal multiplicative constant. The
  natural remaining question\, then\, is whether it is possible to bridge
  the gap and obtain an estimator with optimal convergence and the right
  constant for all finite-variance distributions.\n\nIn this talk\, we
  answer the question affirmatively by giving an estimator that converges
  with the optimal constant inside the big-O\, up to a 1+o(1)
  multiplicative factor. The estimator is also easy to compute. The
  convergence analysis involves deriving tail bounds using linear and
  convex-concave programming dualities\, which may be of independent
  interest.\n\nBased on joint work with Paul Valiant.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/23/
END:VEVENT
END:VCALENDAR
