BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Johan Jonasson (Chalmers University and University of Gothenburg)
DTSTART:20230119T141600Z
DTEND:20230119T150000Z
DTSTAMP:20260422T155025Z
UID:gbgstats/6
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/gbgstats/6/"
 >Noise sensitivity/stability for deep Boolean neural nets</a>\nby Johan Jo
 nasson (Chalmers University and University of Gothenburg) as part of Gothe
 nburg statistics seminar\n\nLecture held in MVL14.\n\nAbstract\nA well-kno
 wn and ubiquitous property of neural net classifiers is that they can be f
 ooled into misclassifying some objects by changing the input in tiny ways 
 that are indistinguishable to the human eye. These changes can be adversa
 rial\, but sometimes they can be just random noise. This makes it interest
 ing to ask if this property is something that almost all neural nets have 
 and\, when they do\, why that is. There are good heuristic explanations\, 
 but to prove mathematically rigorous results seems very difficult in gener
 al. Here we prove some first results on various toy models. We treat our q
 uestions within the framework of the established field of noise sensitivit
 y/stability. What we prove can roughly be stated as:\n \n<ul><li>\nA suffi
 ciently deep fully connected network with sufficiently wide layers and iid
 Gaussian weights is noise sensitive\, i.e. the predicted classes of a bin
 ary input string before and after an arbitrarily small random noise is ad
 ded are virtually independent. If one imposes correlations on the
  weights corresponding to the same input features\, this still holds unles
 s the correlation is very close to 1.</li>\n<li>\nNeural nets consisting o
 f only convolutional layers may or may not be noise sensitive and we prese
 nt examples of both behaviours.</li>\n</ul>\n
LOCATION:https://researchseminars.org/talk/gbgstats/6/
END:VEVENT
END:VCALENDAR
