BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Mahdi Soltanolkotabi (USC)
DTSTART:20200708T140000Z
DTEND:20200708T150000Z
DTSTAMP:20260423T021141Z
UID:MADPlus/11
DESCRIPTION:Title: Learning via early stopping and untrained neural nets (
 https://researchseminars.org/talk/MADPlus/11/)\nby Mahdi Soltanolkotabi (U
 SC) as part of MAD+\n\nAbstract\nModern neural networks are typically trai
 ned in an over-parameterized regime where the parameters of the model fa
 r exceed the size of the training data. Such neural networks in principl
 e have the capacity to (over)fit any set of labels\, including significan
 tly corrupted ones. Despite this (over)fitting capacity\, over-parameteri
 zed networks have an intriguing robustness capability: they are surprisin
 gly robust to label noise when first-order methods with early stopping ar
 e used to train them. Even more surprisingly\, one can remove noise and c
 orruption from a natural image without using any training data whatsoever
 \, by simply fitting (via gradient descent) a randomly initialized\, over
 -parameterized convolutional generator to a single corrupted image. In th
 is talk I will first present theoretical results aimed at explaining the r
 obustness of neural networks when trained via early-stopped gradient desc
 ent. I will then present results towards demystifying untrained networks f
 or image reconstruction/restoration tasks such as denoising and those ari
 sing in inverse problems such as compressive sensing.\n
LOCATION:https://researchseminars.org/talk/MADPlus/11/
END:VEVENT
END:VCALENDAR
