BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Anima Anandkumar (Caltech)
DTSTART:20200709T190000Z
DTEND:20200709T203000Z
DTSTAMP:20260423T021057Z
UID:IASML/10
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/IASML/10/">R
 ole of Interaction in Competitive Optimization</a>\nby Anima Anandkumar (C
 altech) as part of IAS Seminar Series on Theoretical Machine Learning\n\n\
 nAbstract\nCompetitive optimization is needed for many ML problems such as
  training GANs\, robust reinforcement learning\, and adversarial learning.
  Standard approaches to competitive optimization involve each agent indepe
 ndently optimizing their objective functions using SGD or other gradient-b
 ased approaches. However\, they suffer from oscillations and instability\,
  since the optimization does not account for interaction among the players
 . We introduce competitive gradient descent (CGD) that explicitly incorpor
 ates interaction by solving for a Nash equilibrium of a local game. We ext
 end CGD to competitive mirror descent (CMD) for solving conically constrai
 ned competitive problems by using the dual geometry induced by a Bregman
  divergence.\n\nWe demonstrate the effectiveness of our approach for train
 ing GANs and solving constrained reinforcement learning (RL) problems. We
  also derive a competitive policy optimization method to train RL agents i
 n competitive games. Finally\, we provide a novel perspective on training
  GANs by pointing out the "GAN-dilemma"\, a fundamental flaw of the diverg
 ence-minimization perspective on GANs. Instead\, we argue that an implicit
  competitive regularization arising from simultaneous training methods\, s
 uch as CGD\, is a crucial mechanism behind GAN performance.\n
LOCATION:https://researchseminars.org/talk/IASML/10/
END:VEVENT
END:VCALENDAR
