Path integral control theory

Hilbert Johan Kappen (Donders Institute, Radboud University Nijmegen, the Netherlands)

28-May-2020, 16:30-17:30

Abstract: Stochastic optimal control theory deals with the problem of computing an optimal set of actions to attain some future goal. Examples are found in many contexts, such as motor control tasks in robotics, planning and scheduling tasks, or managing a financial portfolio. The computation of the optimal control is typically very difficult due to the size of the state space and the stochastic nature of the problem. Special cases for which the computation is tractable are linear dynamical systems with quadratic cost and deterministic control problems. For a special class of non-linear stochastic control problems, the solution can be mapped onto a statistical inference problem. For these so-called path integral control problems, the optimal cost-to-go solution of the Bellman equation is given by the minimum of a free energy. I will give a high-level introduction to the underlying theory and illustrate it with some examples from robotics and other areas.
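
The abstract states the free-energy result only in words. For readers who want the formula, the following LaTeX sketch records the standard path integral control setup from Kappen's published work; the notation (f, g, V, R, nu, lambda) is chosen here for illustration and is not taken from the talk itself.

% Sketch of the standard path-integral control setting (notation assumed, not from the abstract).
% Controlled diffusion: control u and noise d\xi enter through the same matrix g.
\begin{align*}
  dx &= f(x,t)\,dt + g(x,t)\bigl(u\,dt + d\xi\bigr),
  \qquad \langle d\xi\, d\xi^{\top}\rangle = \nu\,dt,\\
  C(x,t,u) &= \Bigl\langle \phi(x_T)
      + \int_t^T \Bigl(V(x_s,s) + \tfrac12\,u_s^{\top} R\, u_s\Bigr)\,ds \Bigr\rangle .
\end{align*}
% Under the compatibility condition \lambda R^{-1} = \nu, the log transform
% J = -\lambda \log \psi turns the Bellman (HJB) equation into a linear PDE, and the
% optimal cost-to-go becomes a free energy over uncontrolled (u = 0) trajectories:
\begin{align*}
  J(x,t) = -\lambda \log
      \Bigl\langle \exp\Bigl(-\tfrac{1}{\lambda}\Bigl[\phi(x_T)
      + \int_t^T V(x_s,s)\,ds\Bigr]\Bigr)\Bigr\rangle_{u=0},
\end{align*}
% which can be estimated by Monte Carlo sampling of uncontrolled paths; this is the
% statistical-inference mapping referred to in the abstract.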

Topics: data structures and algorithms; machine learning; mathematical physics; information theory; optimization and control; data analysis, statistics and probability

Audience: researchers in the topic

(video)


Mathematics, Physics and Machine Learning (IST, Lisbon)

Series comments: To receive the series announcements, please register at:
mpml.tecnico.ulisboa.pt
mpml.tecnico.ulisboa.pt/registration
Zoom link: videoconf-colibri.zoom.us/j/91599759679

Organizers: Mário Figueiredo, Tiago Domingos, Francisco Melo, Jose Mourao*, Cláudia Nunes, Yasser Omar, Pedro Alexandre Santos, João Seixas, Cláudia Soares, João Xavier
*contact for this listing
