BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Usman Khan (Tufts University)
DTSTART:20210709T130000Z
DTEND:20210709T140000Z
DTSTAMP:20260423T021000Z
UID:MPML/51
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/MPML/51/">Di
 stributed ML: Optimal algorithms for distributed stochastic non-convex opt
 imization</a>\nby Usman Khan (Tufts University) as part of Mathematics\, P
 hysics and Machine Learning (IST\, Lisbon)\n\n\nAbstract\nIn many emerging
  applications\, it is of paramount interest to learn hidden parameters fro
 m data. For example\, self-driving cars may use onboard cameras to identif
 y pedestrians\, highway lanes\, or traffic signs in various light and weat
 her conditions. Problems such as these can be framed as classification\, r
 egression\, or risk minimization in general\, at the heart of which lies s
 tochastic optimization and machine learning. In many practical scenarios\,
  distributed and decentralized learning methods are preferable as they ben
 efit from a divide-and-conquer approach towards data at the expense of loc
 al (short-range) communication. In this talk\, I will present our recent w
 ork that develops a novel algorithmic framework to address various aspects
  of decentralized stochastic first-order optimization methods for non-conv
 ex problems. A major focus will be to characterize regimes where decentral
 ized solutions outperform their centralized counterparts and lead to optim
 al convergence guarantees. Moreover\, I will characterize certain desirabl
 e attributes of decentralized methods in the context of linear speedup and
 network-independent convergence rates. Throughout the talk\, I will demons
 trate such key aspects of the proposed methods with the help of provable t
 heoretical results and numerical experiments on real data.\n
LOCATION:https://researchseminars.org/talk/MPML/51/
END:VEVENT
END:VCALENDAR
