Building (and breaking) neural networks that think fast and slow

Tom Goldstein (University of Maryland)

17-Nov-2022, 17:00-18:00

Abstract: Most neural networks are built to solve simple pattern-matching tasks, a process often known as “fast” thinking. In this talk, I’ll use adversarial methods to explore the robustness of neural networks. I’ll also discuss whether vulnerabilities of AI systems that have been observed in academic labs can pose real security threats to industrial systems. Then, I’ll present methods for constructing neural networks that exhibit “slow” thinking abilities akin to human logical reasoning. Rather than learning simple pattern-matching rules, these networks have the ability to synthesize algorithmic reasoning processes and solve difficult discrete search and planning problems that cannot be solved by conventional AI systems. Interestingly, these reasoning systems naturally exhibit error correction and robustness properties that make them more difficult to break than their fast-thinking counterparts.
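
For readers unfamiliar with the adversarial methods mentioned in the abstract, the sketch below illustrates the standard one-step fast gradient sign method (FGSM) in PyTorch. It is a minimal illustration only, not material from the talk; the model, inputs x, labels y, and the perturbation budget epsilon are placeholder assumptions.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # One-step FGSM: perturb x in the direction of the sign of the loss gradient.
    # Assumes inputs are images scaled to [0, 1]; epsilon is the perturbation budget.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
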

Topics: data structures and algorithms; machine learning; mathematical physics; information theory; optimization and control; data analysis, statistics and probability

Audience: researchers in the topic


Mathematics, Physics and Machine Learning (IST, Lisbon)

Series comments: To receive the series announcements, please register at:
mpml.tecnico.ulisboa.pt
mpml.tecnico.ulisboa.pt/registration
Zoom link: videoconf-colibri.zoom.us/j/91599759679

Organizers: Mário Figueiredo, Tiago Domingos, Francisco Melo, Jose Mourao*, Cláudia Nunes, Yasser Omar, Pedro Alexandre Santos, João Seixas, Cláudia Soares, João Xavier
*contact for this listing
