BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Andrea Montanari (Stanford)
DTSTART:20200624T140000Z
DTEND:20200624T150000Z
DTSTAMP:20260423T021140Z
UID:MADPlus/10
DESCRIPTION:Title: The generalization error of overparametrized models:
 Insights from exact asymptotics (https://researchseminars.org/talk/MAD
 Plus/10/)\nby Andrea Montanari (Stanford) as part of MAD+\n\nAbstract\nIn
 a canonical supervised learning setting\, we are given n data samples\,
 each comprising a feature vector and a label (or response variable). We
 are asked to learn a function f that can predict the label associated w
 ith a new\, unseen feature vector. How is it possible that the model le
 arnt from observed data generalizes to new points? Classical learning t
 heory assumes that data points are drawn i.i.d. from a common distribut
 ion and argues that this phenomenon is a consequence of uniform converg
 ence: the training error is close to its expectation uniformly over all
 models in a certain class. Modern deep learning systems appear to defy 
 this viewpoint: they achieve training error significantly smaller than 
 the test error\, yet generalize well to new data. I will present a sequ
 ence of high-dimensional examples in which this phenomenon can be under
 stood in detail. [Based on joint work with Song Mei\, Feng Ruan\, Young
 tak Sohn\, Jun Yan]\n
LOCATION:https://researchseminars.org/talk/MADPlus/10/
END:VEVENT
END:VCALENDAR
