Noise sensitivity/stability for deep Boolean neural nets

Johan Jonasson (Chalmers University and University of Gothenburg)

19-Jan-2023, 14:16-15:00

Abstract: A well-known and ubiquitous property of neural net classifiers is that they can be fooled into misclassifying some objects by tiny changes to the input that are indistinguishable to the human eye. These changes can be adversarial, but sometimes plain random noise suffices. This makes it interesting to ask whether this property is something that almost all neural nets have and, when they do, why that is. There are good heuristic explanations, but proving mathematically rigorous results seems very difficult in general. Here we prove some first results for various toy models. We treat our questions within the framework of the established field of noise sensitivity/stability. What we prove can roughly be stated as:

  • A sufficiently deep fully connected network with sufficiently wide layers and iid Gaussian weights is noise sensitive, i.e. an arbitrarily small amount of random noise makes the predicted classes of a binary input string before and after the noise is added virtually independent of each other (a standard formalization is recalled after this list). If one imposes correlations between the weights corresponding to the same input features, this still holds unless the correlation is very close to 1.
  • Neural nets consisting of only convolutional layers may or may not be noise sensitive, and we present examples of both behaviours.
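
For readers new to the framework, the following is the standard formalization of noise sensitivity due to Benjamini, Kalai and Schramm; the notation $f_n$, $X$ and $X^{\epsilon}$ is introduced here only for concreteness, and the exact setup of the talk (noise model, class of functions) may differ in details. A sequence of Boolean functions $f_n : \{-1,1\}^n \to \{-1,1\}$ is noise sensitive if, for every fixed $\epsilon \in (0,1]$,

  $\lim_{n \to \infty} \mathrm{Cov}\big(f_n(X), f_n(X^{\epsilon})\big) = 0,$

where $X$ is uniform on $\{-1,1\}^n$ and $X^{\epsilon}$ is obtained from $X$ by independently resampling each coordinate with probability $\epsilon$ (equivalently, flipping it with probability $\epsilon/2$). Noise stability is the complementary property that $\sup_n \Pr\big(f_n(X) \neq f_n(X^{\epsilon})\big) \to 0$ as $\epsilon \to 0$.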
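
Purely as an illustration of the first bullet point (not the construction or proof from the talk), here is a minimal simulation sketch; the architecture, the 1/sqrt(fan-in) weight scaling and all parameter values below are assumptions made for this example only. Heuristically, if the family of random networks is noise sensitive, the estimated agreement probability should drift towards 1/2 as depth and width grow.

    import numpy as np

    def random_relu_net(widths, rng):
        # iid Gaussian weights, scaled by 1/sqrt(fan-in) (an assumption for this sketch)
        return [rng.normal(0.0, 1.0 / np.sqrt(widths[i]), size=(widths[i], widths[i + 1]))
                for i in range(len(widths) - 1)]

    def predict(weights, x):
        # Binary class given by the sign of the scalar output of the net.
        h = x.astype(float)
        for W in weights[:-1]:
            h = np.maximum(h @ W, 0.0)   # fully connected ReLU layers
        return float(np.sign(h @ weights[-1])[0])

    def agreement(depth=10, width=200, n=200, eps=0.01, trials=200, seed=0):
        # Estimate P(predicted class unchanged) when each input bit is flipped w.p. eps,
        # averaged over both the random weights and the random binary input.
        rng = np.random.default_rng(seed)
        widths = [n] + [width] * depth + [1]
        same = 0
        for _ in range(trials):
            weights = random_relu_net(widths, rng)
            x = rng.choice([-1.0, 1.0], size=n)
            x_noisy = np.where(rng.random(n) < eps, -x, x)
            same += predict(weights, x) == predict(weights, x_noisy)
        return same / trials

    if __name__ == "__main__":
        print(agreement())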

machine learning, probability, statistics theory

Audience: researchers in the topic


Gothenburg statistics seminar

Series comments: The Gothenburg statistics seminar is open to the interested public; everybody is welcome. It usually takes place in MVL14 (http://maps.chalmers.se/#05137ad7-4d34-45e2-9d14-7f970517e2b60, see specific talk).

Organizers: Moritz Schauer*, Ottmar Cronie*
*contact for this listing