The least-control principle for learning at equilibrium

João Sacramento (ETH Zürich)

10-Nov-2022, 17:00-18:00

Abstract: A large number of models of interest in both neuroscience and machine learning can be expressed as dynamical systems at equilibrium. This class of systems includes deep neural networks, equilibrium recurrent neural networks, and meta-learning. In this talk I will present a new principle for learning equilibria with a temporally and spatially local rule. Our principle casts learning as a least-control problem: we first introduce an optimal controller to lead the system towards a solution state, and then define learning as reducing the amount of control needed to reach such a state. We show that incorporating learning signals within the dynamics as an optimal control enables transmitting activity-dependent credit assignment information, avoids storing intermediate states in memory, and does not rely on infinitesimal learning signals. In practice, our principle achieves strong performance, matching that of leading gradient-based learning methods on an array of benchmark experiments. Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
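The least-control idea in the abstract can be illustrated on a toy system. The sketch below is not the talk's method, only a minimal assumed instance: leaky linear dynamics ds/dt = -s + W @ x whose free equilibrium is s* = W @ x, an optimal (here, exactly state-holding) control u that pins the dynamics at a desired solution state y, and a learning rule that descends the control norm ||u||^2. All names (W, x, y, lr) are illustrative. Note that the resulting weight update is local: the control signal times the presynaptic activity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy equilibrium system: ds/dt = -s + W @ x, so the free equilibrium is s* = W @ x.
n_in, n_out = 5, 3
W = rng.normal(scale=0.1, size=(n_out, n_in))
x = rng.normal(size=n_in)
y = rng.normal(size=n_out)  # desired solution state

lr = 0.5 / (x @ x)  # step size scaled by input norm for stable descent
for _ in range(200):
    # Control that holds the dynamics at y: 0 = -y + W @ x + u  =>  u = y - W @ x.
    u = y - W @ x
    # Least-control learning: gradient descent on ||u||^2 / 2 w.r.t. W.
    # The update is local: control signal (postsynaptic) times input (presynaptic).
    W += lr * np.outer(u, x)

# After learning, the free equilibrium needs (almost) no control to reach y.
print(np.linalg.norm(y - W @ x))
```

In this linear case, driving the control to zero reduces to least-squares regression (the delta rule); the talk's contribution is making the same principle work for general nonlinear equilibrium systems with an optimal controller.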

data structures and algorithms; machine learning; mathematical physics; information theory; optimization and control; data analysis, statistics and probability

Audience: researchers in the topic


Mathematics, Physics and Machine Learning (IST, Lisbon)

Series comments: To receive the series announcements, please register at:
mpml.tecnico.ulisboa.pt
mpml.tecnico.ulisboa.pt/registration
Zoom link: videoconf-colibri.zoom.us/j/91599759679

Organizers: Mário Figueiredo, Tiago Domingos, Francisco Melo, Jose Mourao*, Cláudia Nunes, Yasser Omar, Pedro Alexandre Santos, João Seixas, Cláudia Soares, João Xavier
*contact for this listing
