BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Jonathan Eckstein (Rutgers University)
DTSTART:20210222T143000Z
DTEND:20210222T153000Z
DTSTAMP:20260423T035056Z
UID:OWOS/37
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OWOS/37/">Pr
 ogressive Hedging and Asynchronous Projective Hedging for Convex Stochasti
 c Programming</a>\nby Jonathan Eckstein (Rutgers University) as part of On
 e World Optimization seminar\n\n\nAbstract\nOperator splitting methods for
  convex optimization and monotone inclusions have their roots in the solut
 ion of partial differential equations\, and have since become popular in m
 achine learning and image processing applications.  Their application to "
 operations-research-style" optimization problems has been somewhat limited
 .\n\nA notable exception is their application to stochastic programming.  
 In a paper published in 1991\, Rockafellar and Wets proposed the progressi
 ve hedging (PH) algorithm to solve large-scale convex stochastic programmi
 ng problems.  Although they proved the convergence of the method from firs
 t principles\, it was already known to them that PH was an operator splitt
 ing method.\n\nThis talk will present a framework for convex stochastic pr
 ogramming and show that applying the ADMM (and thus Douglas-Rachford split
 ting) to it yields the PH algorithm.  The equivalence of PH to ADMM has lo
 ng been known but not explicitly published.\n\nNext\, the talk will apply 
 the projective splitting framework of Combettes and Eckstein to the same f
 ormulation\, yielding a method which is similar to PH but can be implement
 ed in a partially asynchronous manner.  We call this method "asynchronous p
 rojective hedging" (APH). Unlike most decomposition methods\, it does not 
 need to solve every subproblem at every iteration\; instead\, each iterati
 on may solve just a single subproblem or a small subset of the available s
 ubproblems.\n\nFinally\, the talk will describe work integrating the APH a
 lgorithm into mpi-sppy\, a Python package for modeling and distributed par
 allel solution of stochastic programming problems. Mpi-sppy uses the Pyomo
  Python-based optimization modeling system.  Our experience includes using
  8\,000 processor cores to solve a test problem instance with 1\,000\,000 s
 cenarios.\n\nThis talk presents joint research with Jean-Paul Watson (Lawr
 ence Livermore National Laboratory\, USA) and David Woodruff (University
  of California\, Davis).\n\nThe address and password of the seminar's Zoom
  room are sent by e-mail to the seminar mailing list one day before each
  talk.\n
LOCATION:https://researchseminars.org/talk/OWOS/37/
END:VEVENT
END:VCALENDAR
