Deterministic and Stochastic Gradient Methods for Non-Smooth Non-Convex Regularized Optimization

Akiko Takeda (University of Tokyo)

29-Jul-2020, 07:00-08:00

Abstract: Our work focuses on deterministic and stochastic gradient methods for optimizing a smooth non-convex loss function with a non-smooth non-convex regularizer. Research on stochastic gradient methods for this setting is quite limited, and until recently no non-asymptotic convergence results had been reported. After presenting a deterministic approach, we describe simple stochastic gradient algorithms for finite-sum and general stochastic optimization problems whose convergence complexities are superior to the current state of the art. We also compare our algorithms’ practical performance on empirical risk minimization problems.
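To make the problem class concrete, the following is a minimal sketch (not the algorithm from the talk) of a stochastic proximal gradient method for min_x f(x) + r(x). For simplicity f is a least-squares loss (convex here, although the talk's setting allows non-convex f), and r is the non-smooth, non-convex ℓ0 penalty, whose proximal map is hard thresholding. All names and parameter values (prox_l0, lam, eta, batch) are illustrative assumptions.

```python
import numpy as np

def prox_l0(v, thresh):
    # Hard thresholding: the proximal map of the non-smooth, non-convex
    # penalty lam * ||x||_0 with step eta, where thresh = sqrt(2 * eta * lam).
    out = v.copy()
    out[np.abs(v) < thresh] = 0.0
    return out

def stochastic_prox_grad(A, b, lam=0.1, eta=0.01, batch=8, iters=2000, seed=0):
    """Minimize (1/2n)||Ax - b||^2 + lam * ||x||_0 by stochastic proximal gradient."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    thresh = np.sqrt(2.0 * eta * lam)
    for _ in range(iters):
        idx = rng.integers(0, n, size=batch)          # sample a mini-batch
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch  # stochastic gradient of the smooth loss
        x = prox_l0(x - eta * g, thresh)              # prox step on the non-convex regularizer
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50)
    x_true[:5] = 3.0
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = stochastic_prox_grad(A, b)
    print("nonzeros recovered:", np.flatnonzero(x_hat))
```

The sketch illustrates the structure the abstract refers to: the smooth part is handled by a (stochastic) gradient step, while the non-smooth non-convex regularizer enters only through its proximal map.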

This talk is based on joint work with Tianxiang Liu, Ting Kei Pong and Michael R. Metel.

Topic: optimization and control

Audience: researchers in the topic


Variational Analysis and Optimisation Webinar

Series comments: Register at www.mocao.org/va-webinar/ to receive the Zoom connection details.

Organizers: Hoa Bui*, Matthew Tam*, Minh Dao, Alex Kruger, Vera Roshchina*, Guoyin Li
*contact for this listing

Export talk to