Derivatives of Solutions of Saddle-Point Problems

Antonin Chambolle (CMAP / École Polytechnique Palaiseau)

01-Mar-2021, 14:30-15:30

Abstract: In a recent paper, we studied how to optimize the quality of the solutions of convex optimization problems over a class of consistent approximations of the total variation. Such a problem requires an efficient way to differentiate a loss function with respect to the solution of a convex problem, computed by an iterative algorithm for which classical back-propagation is not always possible due to memory limitations. In this talk we will describe a simple way to compute the adjoint states, which makes it possible to estimate such gradients, and discuss issues related to the smoothness of the objective.

Joint work with T. Pock (TU Graz).
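The key idea in the abstract is differentiating a loss through the solution of an iteratively solved problem without storing every iterate, as back-propagation would. As a rough illustration only (not the algorithm presented in the talk), the sketch below differentiates a loss through a fixed-point iteration via an adjoint recursion that needs only the limit point; the quadratic model, the map T, and all names are illustrative assumptions.

import numpy as np

# Illustrative sketch (assumptions, not the speaker's method):
# forward problem  x*(theta) = argmin_x 0.5 x^T A(theta) x - b^T x,
# solved by the fixed-point map T(x) = x - tau * (A(theta) x - b),
# with loss L(x) = 0.5 ||x - x_target||^2.
# Back-propagating through T would store every iterate x_k; the
# adjoint recursion below uses only the limit point x*.

def solve_and_grad(theta, b, x_target, tau=0.1, iters=2000):
    n = b.size
    A = np.diag(theta)            # toy parametrization: A(theta) = diag(theta)

    # Forward pass: fixed-point iteration x_{k+1} = T(x_k).
    x = np.zeros(n)
    for _ in range(iters):
        x = x - tau * (A @ x - b)

    # Adjoint pass: lam is the fixed point of the transposed
    # linearized map, lam = (dT/dx)^T lam + dL/dx evaluated at x*.
    dT_dx = np.eye(n) - tau * A   # Jacobian of T w.r.t. x (constant here)
    g = x - x_target              # gradient of the loss at x*
    lam = np.zeros(n)
    for _ in range(iters):
        lam = dT_dx.T @ lam + g

    # Chain rule: dL/dtheta_i = lam^T (dT/dtheta_i) at x*.
    # Here dT/dtheta_i = -tau * x_i * e_i, so component i is
    # -tau * lam_i * x_i.
    grad_theta = -tau * lam * x
    return x, grad_theta

theta = np.array([2.0, 3.0, 5.0])
b = np.array([1.0, 1.0, 1.0])
x_star, g = solve_and_grad(theta, b, x_target=np.zeros(3))
print(x_star, g)  # x* = b/theta; g matches -b_i^2/theta_i^3 analytically

Note that the adjoint pass costs about as much as the forward pass but uses constant memory, which is the point of the approach sketched in the abstract.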

optimization and control

Audience: learners

Comments: The address and password of the Zoom room of the seminar are sent by e-mail to the seminar's mailing list one day before each talk.


One World Optimization Seminar

Series comments: Online seminar on optimization and related areas.

Organizers: Sorin-Mihai Grad*, Radu Ioan Boț, Shoham Sabach, Mathias Staudigl
*contact for this listing
