On the outsized importance of learning rates in local update methods

Jakub Konečný (Google)

08-Jul-2020, 13:00-14:00

Abstract: In this work, we study a family of algorithms, which we refer to as local update methods, that generalize many federated learning and meta-learning algorithms. We prove that for quadratic objectives, local update methods perform stochastic gradient descent on a surrogate loss function which we exactly characterize. We show that the choice of client learning rate controls the condition number of that surrogate loss, as well as the distance between the minimizers of the surrogate and true loss functions. We use this theory to derive novel convergence rates for federated averaging that showcase this trade-off between the condition number of the surrogate loss and its alignment with the true loss function. We validate our results empirically, showing that in communication-limited settings, proper learning rate tuning is often sufficient to reach near-optimal behavior. We also present a practical method for automatic learning rate decay in local update methods that helps reduce the need for learning rate tuning, and highlight its empirical performance on a variety of tasks and datasets.
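To make the abstract's setting concrete, here is a minimal illustrative sketch (not the authors' code) of a local update method: FedAvg, in which each client takes K gradient steps with a client learning rate before the server averages the updates. The synthetic quadratic client objectives, the constants, and all variable names are assumptions made for illustration; the sketch only shows how the choice of client learning rate shifts the point the method converges to relative to the minimizer of the true average loss.

```python
# Illustrative sketch of a local update method (FedAvg) on synthetic
# quadratic client objectives f_i(x) = 0.5 x^T A_i x - b_i^T x.
# Not the authors' implementation; all constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, num_clients, K, rounds = 5, 10, 20, 200

# Random positive-definite quadratics, one per client.
As, bs = [], []
for _ in range(num_clients):
    M = rng.normal(size=(d, d))
    As.append(M @ M.T / d + np.eye(d))
    bs.append(rng.normal(size=d))

A_bar = sum(As) / num_clients
b_bar = sum(bs) / num_clients
x_star = np.linalg.solve(A_bar, b_bar)  # minimizer of the true average loss


def fedavg(client_lr, server_lr=1.0):
    """Each round, every client runs K local gradient steps with step size
    client_lr starting from the server model; the server averages the updates."""
    x = np.zeros(d)
    for _ in range(rounds):
        deltas = []
        for A, b in zip(As, bs):
            y = x.copy()
            for _ in range(K):
                y -= client_lr * (A @ y - b)  # local gradient step
            deltas.append(y - x)
        x += server_lr * np.mean(deltas, axis=0)
    return x


# Larger client learning rates make faster per-round progress but converge to
# a point further from x_star (the surrogate/true-loss trade-off in the abstract).
for lr in [0.1, 0.03, 0.01]:
    x_hat = fedavg(lr)
    print(f"client lr={lr:5.2f}  ||x_hat - x_star|| = {np.linalg.norm(x_hat - x_star):.4f}")
```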

optimization and control

Audience: researchers in the topic


Federated Learning One World Seminar

Series comments: Research seminar on federated learning and related fields

Please register for the seminar on the website here (https://sites.google.com/view/one-world-seminar-series-flow/register#h.eoftjj4xztpb). Before the seminar begins, a Zoom link with a password will be sent to the e-mail addresses of everyone who has registered for the mailing list.

Organizers: Peter Richtárik, Virginia Smith, Aurélien Bellet, Dan Alistarh
Curator: Ahmed Khaled*
*contact for this listing
