BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Andrea Montanari (Stanford)
DTSTART:20200610T140000Z
DTEND:20200610T150000Z
DTSTAMP:20260423T035418Z
UID:MADPlus/7
DESCRIPTION:Title: The generalization error of overparametrized models: In
 sights from exact asymptotics\nby Andrea Montanari (Stanford) as part of
  MAD+\n\nAbstract\nIn a canonical supervised learning setting\, we are giv
 en n data sampl
 ct\nIn a canonical supervised learning setting\, we are given n data sampl
 es\, each comprising a feature vector and a label\, or response variable. 
 We are asked to learn a function f that can predict the label associated
  with a new (unseen) feature vector. How is it possible that the model
  learnt from observed data generalizes to new points? Classical learning t
 heory assumes that data points are drawn i.i.d. from a common distribution
  and argues that this phenomenon is a consequence of uniform convergence: t
 he training error is close to its expectation uniformly over all models in
  a certain class. Modern deep learning systems appear to defy this viewpoi
 nt: they achieve training error that is significantly smaller than the tes
 t error\, and yet generalize well to new data. I will present a sequence o
 f high-dimensional examples in which this phenomenon can be understood in 
 detail. [Based on joint work wit\n
LOCATION:https://researchseminars.org/talk/MADPlus/7/
END:VEVENT
END:VCALENDAR
