BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Andreas Bärmann/Kevin Aigner (FAU Erlangen-Nürnberg)
DTSTART:20210706T101500Z
DTEND:20210706T114500Z
DTSTAMP:20260423T022602Z
UID:MathDeep/12
DESCRIPTION:Title: Online Learning for Optimization Problems with Unknown
  or Uncertain Cost Functions
  (https://researchseminars.org/talk/MathDeep/12/)\nby Andreas
  Bärmann/Kevin Aigner (FAU Erlangen-Nürnberg) as part of Mathematics of
  Deep Learning\n\nAbstract\nThe first part of the talk begins by
  recapitulating several basic algorithms and results in online
  learning\, in particular the multiplicative weights method and online
  gradient descent. Based on these algorithms\, we demonstrate how to
  learn the objective function of a decision-maker while observing only
  the problem input data and the decision-maker’s corresponding
  decisions over multiple rounds. Our approach works for linear
  objectives over arbitrary feasible sets for which we have a linear
  optimization oracle. The two exact algorithms we present\, based on
  multiplicative weights updates and online gradient descent
  respectively\, converge at a rate of $O(1/\\sqrt{T})$ and thus allow
  us to take decisions that are essentially as good as those of the
  observed decision-maker after relatively few observations. We show
  the effectiveness and possible applications of our methods in a broad
  computational study. This is joint work with Alexander Martin\,
  Sebastian Pokutta and Oskar Schneider.\n\nIn the second part of the
  talk\, we consider the robust treatment of stochastic optimization
  problems involving random vectors with unknown discrete probability
  distributions. With this problem class\, we demonstrate the basic
  concepts of data-driven optimization under uncertainty. Furthermore\,
  we introduce a new iterative approach that uses scenario observations
  to learn more about the uncertainty over time. This means our
  solutions become less and less conservative\, interpolating between
  distributionally robust and stochastic optimization. We achieve this
  by solving the distributionally robust optimization problem over time
  via an online-learning approach while iteratively updating the
  ambiguity sets. We provide a regret bound for the quality of the
  obtained solutions that converges at a rate of $O((\\log T)/T)$ and
  illustrate the effectiveness of our procedure through numerical
  experiments. Our proposed algorithm solves the online learning
  problem significantly faster than equivalent reformulations. This is
  joint work with Kristin Braun\, Frauke Liers\, Sebastian Pokutta\,
  Oskar Schneider\, Kartikey Sharma and Sebastian Tschuppik.\n
LOCATION:https://researchseminars.org/talk/MathDeep/12/
END:VEVENT
END:VCALENDAR
