Adaptive federated optimization
Sashank Reddi (Google)
Abstract: Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model without sharing their own training data. Due to the heterogeneity of the client datasets, standard federated optimization methods such as Federated Averaging (FedAvg) are often difficult to tune and exhibit unfavorable convergence behavior. In non-federated settings, adaptive optimization methods have had notable success in combating such issues. In this work, we propose federated versions of adaptive optimizers, including Adagrad, Adam, and Yogi, and analyze their convergence in the presence of heterogeneous data for general nonconvex settings. Our results highlight the interplay between client heterogeneity and communication efficiency. We also perform extensive experiments on these methods and show that the use of adaptive optimizers can significantly improve the performance of federated learning.
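As a rough illustration of the server-side idea described in the abstract, the sketch below shows a FedAdam-style update in plain Python/NumPy: clients run a few local SGD steps, the server treats the negative average of their model deltas as a pseudo-gradient, and applies an Adam-like adaptive step. This is a minimal sketch under stated assumptions, not the speaker's exact algorithm; the function names, hyperparameters, and the toy least-squares objective are all illustrative.

```python
# Minimal sketch of a FedAdam-style server update (illustrative only; the
# helper names, hyperparameters, and toy objective are assumptions, not the
# speaker's code).
import numpy as np

def client_update(w, X, y, lr=0.01, local_steps=5):
    """Run a few local SGD steps on one client's least-squares data."""
    w = w.copy()
    for _ in range(local_steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_adam(clients, w, rounds=100, server_lr=0.1,
             beta1=0.9, beta2=0.99, tau=1e-3):
    """Server averages client deltas into a pseudo-gradient and applies
    an Adam-style adaptive update to the global model."""
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for _ in range(rounds):
        # Each client starts from the current global model.
        deltas = [client_update(w, X, y) - w for X, y in clients]
        pseudo_grad = -np.mean(deltas, axis=0)  # negative average delta
        m = beta1 * m + (1 - beta1) * pseudo_grad
        v = beta2 * v + (1 - beta2) * pseudo_grad ** 2
        w = w - server_lr * m / (np.sqrt(v) + tau)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = rng.normal(size=5)
    # Heterogeneous clients: each sees a differently shifted feature distribution.
    clients = []
    for i in range(10):
        X = rng.normal(loc=0.1 * i, size=(50, 5))
        clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))
    w = fed_adam(clients, np.zeros(5))
    print("error:", np.linalg.norm(w - w_true))
```

Swapping the server update for an Adagrad- or Yogi-style second-moment rule gives the other variants mentioned in the abstract; the client-side local SGD loop stays the same.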
optimization and control
Audience: researchers in the field
Federated Learning One World Seminar
Series comments: Research seminar on federated learning and related fields
Please register for the seminar on the website here (https://sites.google.com/view/one-world-seminar-series-flow/register#h.eoftjj4xztpb). Before the seminar begins, a Zoom link with a password will be sent to the e-mail addresses of everyone who has registered for the mailing list.
| Organizers: | Peter Richtárik, Virginia Smith, Aurélien Bellet, Dan Alistarh |
| Curator: | Ahmed Khaled* |
| *contact for this listing |
