Generalizable Adversarial Robustness to Unforeseen Attacks

Soheil Feizi (University of Maryland College Park)

23-Jun-2020, 16:30-17:45

Abstract: In the last couple of years, much progress has been made in enhancing the robustness of models against adversarial attacks. However, two major shortcomings remain: (i) practical defenses are often vulnerable to strong “adaptive” attack algorithms, and (ii) current defenses generalize poorly to “unforeseen” attack threat models (those not used in training).

In this talk, I will present our recent results on tackling these issues. I will first discuss the generalizability of a class of provable defenses based on randomized smoothing to various Lp and non-Lp attack models. Then, I will present adversarial attacks and defenses for a novel “perceptual” adversarial threat model. Remarkably, the defense against the perceptual threat model generalizes well to many types of unforeseen Lp and non-Lp adversarial attacks.
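For readers unfamiliar with randomized smoothing, the core idea of this class of provable defenses is to classify a point by majority vote of a base classifier under random input noise; the vote margin then yields a certified robustness radius. The sketch below illustrates only the smoothed-prediction step in the standard Gaussian-noise setting, not the speaker's specific constructions; `base_classifier` and the toy linear classifier are hypothetical stand-ins for illustration.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.5, n_samples=1000, seed=0):
    """Predict by majority vote of base_classifier over Gaussian perturbations.

    A minimal sketch of the smoothed classifier
    g(x) = argmax_c P[f(x + eps) = c], eps ~ N(0, sigma^2 I).
    `base_classifier` maps a numpy vector to an integer label (assumption).
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    labels = [base_classifier(x + eps) for eps in noise]
    votes = np.bincount(labels)
    return int(np.argmax(votes))

# Toy base classifier: label 1 iff the mean coordinate is positive.
clf = lambda z: int(z.mean() > 0)

# A point far from the decision boundary keeps its label under smoothing.
print(smoothed_predict(clf, np.ones(4)))  # prints 1
```

In the full certification procedure (e.g. Cohen et al.-style smoothing), a lower confidence bound on the top-class vote probability p_A gives a certified L2 radius of sigma * Phi^{-1}(p_A); the talk's point is how far such guarantees extend beyond the Lp threat model used to derive them.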

This talk is based on joint works with Alex Levine, Sahil Singla, Cassidy Laidlaw, Aounon Kumar and Tom Goldstein.

Topics: bioinformatics, game theory, information theory, machine learning, neural and evolutionary computing, classical analysis and ODEs, optimization and control, statistics theory

Audience: researchers in the topic


IAS Seminar Series on Theoretical Machine Learning

Series comments: Seminar series focusing on machine learning. Open to all.

Register in advance at forms.gle/KRz8hexzxa5P4USr7 to receive Zoom link and password. Recordings of past seminars can be found at www.ias.edu/video-tags/seminar-theoretical-machine-learning

Organizers: Ke Li*, Sanjeev Arora
*contact for this listing
