BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Sashank Reddi (Google)
DTSTART:20200715T130000Z
DTEND:20200715T140000Z
DTSTAMP:20260423T035933Z
UID:FLOW/9
DESCRIPTION:Title: Adaptive federated optimization (https://researchsemi
 nars.org/talk/FLOW/9/)\nby Sashank Reddi (Google) as part of the Federat
 ed Learning One World Seminar\n\nAbstract\nFederated learning is a distr
 ibuted machine learning paradigm in which a large number of clients coor
 dinate with a central server to learn a model without sharing their own 
 training data. Due to the heterogeneity of the client datasets\, standar
 d federated optimization methods such as Federated Averaging (FedAvg) ar
 e often difficult to tune and exhibit unfavorable convergence behavior. 
 In non-federated settings\, adaptive optimization methods have had notab
 le success in combating such issues. In this work\, we propose federated 
 versions of adaptive optimizers\, including Adagrad\, Adam\, and Yogi\, 
 and analyze their convergence in general nonconvex settings in the prese
 nce of heterogeneous data. Our results highlight the interplay between c
 lient heterogeneity and communication efficiency. We also perform extens
 ive experiments on these methods and show that the use of adaptive optim
 izers can significantly improve the performance of federated learning.\n
LOCATION:https://researchseminars.org/talk/FLOW/9/
END:VEVENT
END:VCALENDAR
