Towards Weaker Variance Assumptions for Stochastic Optimization: A Blast From the Past

Ahmet Alacaoglu (UBC)

Tue Mar 3, 23:30-00:30

Abstract: In this talk, I will present some recent advances in analyzing stochastic optimization methods without the bounded variance assumption. It is well known that the bounded variance assumption is violated even for the most standard problems, such as the linear least-squares problem. We will see that the analysis for obtaining optimal rates of convergence under realistic variance assumptions builds on a connection between the classical literature on stochastic approximation and the Halpern iteration for solving fixed-point problems. We will discuss extensions to proximal algorithms for solving regularized problems and stochastic convex nonlinear programs, as well as the ideas required for obtaining rate guarantees on the last iterate of the algorithm, which is the iterate widely used in practice.
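For readers unfamiliar with the Halpern iteration mentioned in the abstract, here is a minimal illustrative sketch (not taken from the talk). It applies the classical anchored update x_{k+1} = beta_k * x0 + (1 - beta_k) * T(x_k) with beta_k = 1/(k+2) to a plane rotation, a nonexpansive map whose only fixed point is the origin; the rotation angle and iteration count below are arbitrary choices for the demo.

```python
import numpy as np

# Halpern iteration for a fixed point of a nonexpansive map T:
#   x_{k+1} = beta_k * x0 + (1 - beta_k) * T(x_k),  beta_k = 1/(k+2).
# T here is a plane rotation: nonexpansive, unique fixed point 0.
# Plain Picard iteration x_{k+1} = T(x_k) would orbit a circle forever;
# the anchoring term pulls the iterates toward the fixed point.
theta = 0.5  # arbitrary rotation angle for the demo
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: R @ x

x0 = np.array([1.0, 0.0])  # anchor point (also the starting point)
x = x0.copy()
for k in range(20000):
    beta = 1.0 / (k + 2)  # classical Halpern stepsize
    x = beta * x0 + (1 - beta) * T(x)

print(np.linalg.norm(x))  # residual shrinks roughly like O(1/k)
```

Note the design choice: the iterate is always a convex combination of the fixed anchor x0 and the operator applied to the current point, which is what yields strong convergence and last-iterate rate guarantees in this literature.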

Mathematics

Audience: researchers in the topic


PIMS-CORDS SFU Operations Research Seminar

Organizer: Tamon Stephen*
*contact for this listing
