BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Volkan Cevher (Laboratory for Information and Inference Systems 
 – LIONS\, EPFL)
DTSTART:20210930T160000Z
DTEND:20210930T170000Z
DTSTAMP:20260423T003236Z
UID:MPML/56
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/MPML/56/">Op
 timization Challenges in Adversarial Machine Learning</a>\nby Volkan Cevhe
 r (Laboratory for Information and Inference Systems – LIONS\, EPFL) as p
 art of Mathematics\, Physics and Machine Learning (IST\, Lisbon)\n\n\nAbst
 ract\nThanks to neural networks (NNs)\, faster computation\, and massive d
 atasets\, machine learning (ML) is under increasing pressure to provide au
 tomated solutions to ever harder real-world tasks\, beyond human perform
 ance and with ever faster response times\, given the potentially huge te
 chnological and societal benefits. Unsurprisingly\, despite their scalab
 ility\, the back-end learning algorithms face a fundamental challenge fr
 om NN learning formulations\, in particular because traps in the non-con
 vex optimization landscape\, such as saddle points\, can prevent them fr
 om obtaining “good” solutions.\n\nIn this talk\, we describe ou
 r recent research demonstrating that the non-convex optimization dogma is
  false: scalable stochastic optimization algorithms can avoid traps and r
 apidly obtain locally optimal solutions. Coupled with progress in represe
 ntation learning\, such as over-parameterized neural networks\, such loca
 l solutions can be globally optimal.\n\nUnfortunate
 ly\, this talk will also demonstrate that the central min-max optimization
  problems in ML\, such as generative adversarial networks (GANs)\, robust 
 reinforcement learning (RL)\, and distributionally robust ML\, contain sp
 urious attractors that do not include any stationary points of the origina
 l learning formulation. Indeed\, we will describe how algorithms are subje
 ct to a grander challenge\, including unavoidable convergence failures\, w
 hich could explain the stagnation in their progress despite the impressive
  earlier demonstrations.\n
LOCATION:https://researchseminars.org/talk/MPML/56/
END:VEVENT
END:VCALENDAR
