BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Jerry Li (Microsoft Research)
DTSTART:20210219T160000Z
DTEND:20210219T171200Z
DTSTAMP:20260423T022715Z
UID:sss/16
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/sss/16/">Fas
 ter and Simpler Algorithms for List Learning</a>\nby Jerry Li (Microsoft R
 esearch) as part of Stochastics and Statistics Seminar Series\n\n\nAbstrac
 t\nThe goal of list learning is to understand how to learn basic statistic
 s of a dataset when it has been corrupted by an overwhelming fraction of o
 utliers. More formally\, one is given a set of points $S$\, of which an $\
 \alpha$-fraction $T$ are promised to be well-behaved. The goal is then to 
 output an $O(1 / \\alpha)$ sized list of candidate means\, so that one of 
 these candidates is close to the true mean of the points in $T$. In many w
 ays\, list learning can be thought of as the natural robust generalization
  of clustering mixture models. This formulation of the problem was first p
 roposed in Charikar-Steinhardt-Valiant STOC’17\, which gave the first po
 lynomial-time algorithm achieving nearly-optimal error guarantees. Mo
 re recently\, exciting work of Cherapanamjeri-Mohanty-Yau FOCS’20 gave a
 n algorithm which ran in time $\\widetilde{O} (n d \\mathrm{poly} (1 / \\a
 lpha))$. In particular\, this runtime is nearly linear in the input size f
 or $1/\\alpha = O(1)$\; however\, the runtime quickly becomes impractical 
 for reasonably small $1/\\alpha$. Moreover\, both of these algorithms are 
 quite complicated.\n\nIn our work\, we have two main contributions. First\
 , we give a polynomial-time algorithm for this problem that achieves opti
 mal error and is considerably simpler than the previously known algori
 thms. Second\, we build on these insights to develop a more sophi
 sticated algorithm based on lazy mirror descent which runs in time $\\wide
 tilde{O}(n d / \\alpha + 1/\\alpha^6)$\, and which also achieves optimal e
 rror. Our algorithm improves upon the runtime of previous work for all $1/
 \\alpha = O(\\sqrt{d})$. The goal of this talk is to give a more or less
  self-contained proof of the first result\, and then explain at a high level how to u
 se these ideas to develop our faster algorithm.\n\nJoint work with Ilias D
 iakonikolas\, Daniel Kane\, Daniel Kongsgaard\, and Kevin Tian\n
LOCATION:https://researchseminars.org/talk/sss/16/
END:VEVENT
END:VCALENDAR
