BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Marc Teboulle (Tel Aviv University)
DTSTART;VALUE=DATE-TIME:20200420T130000Z
DTEND;VALUE=DATE-TIME:20200420T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/1
DESCRIPTION:Title: Hid
den Convexity in Nonconvex Quadratic Optimization\nby Marc Teboulle (T
el Aviv University) as part of One World Optimization seminar\n\n\nAbstrac
t\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alexandre d'Aspremont (École Normale Supérieure Paris (ENS))
DTSTART;VALUE=DATE-TIME:20200615T130000Z
DTEND;VALUE=DATE-TIME:20200615T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/2
DESCRIPTION:Title: Nai
ve feature selection: Sparsity in naive Bayes\nby Alexandre d'Aspremon
t (École Normale Supérieure Paris (ENS)) as part of One World Optimizati
on seminar\n\n\nAbstract\nDue to its linear complexity\, naive Bayes class
ification remains an attractive supervised learning method\, especially in
very large-scale settings. We propose a sparse version of naive Bayes\, w
hich can be used for feature selection. This leads to a combinatorial maxi
mum-likelihood problem\, for which we provide an exact solution in the cas
e of binary data\, or a bound in the multinomial case. We prove that our b
ound becomes tight as the marginal contribution of additional features dec
reases. Both binary and multinomial sparse models are solvable in time alm
ost linear in problem size\, representing a very small extra relative cost
compared to the classical naive Bayes. Numerical experiments on text data
show that the naive Bayes feature selection method is as statistically ef
fective as state-of-the-art feature selection methods such as recursive fe
ature elimination\, l1-penalized logistic regression and LASSO\, while bei
ng orders of magnitude faster. For a large data set with more than 1.6 million training points and about 12 million features\, and with a n
on-optimized CPU implementation\, our sparse naive Bayes model can be trai
ned in less than 15 seconds.\n\nThe talk is based on joint work with Armin
Askari and Laurent El Ghaoui that can be found at https://arxiv.org/abs/1
905.09884\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Peter Richtárik (KAUST)
DTSTART;VALUE=DATE-TIME:20200427T130000Z
DTEND;VALUE=DATE-TIME:20200427T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/3
DESCRIPTION:Title: On
Second Order Methods and Randomness\nby Peter Richtárik (KAUST) as pa
rt of One World Optimization seminar\n\n\nAbstract\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Russell Luke (University of Göttingen)
DTSTART;VALUE=DATE-TIME:20200504T130000Z
DTEND;VALUE=DATE-TIME:20200504T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/4
DESCRIPTION:Title: Ite
rated self-mappings in nonlinear spaces: the case of random function iter
ations and inconsistent stochastic feasibility\nby Russell Luke (Unive
rsity of Göttingen) as part of One World Optimization seminar\n\n\nAbstra
ct\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wotao Yin (UCLA)
DTSTART;VALUE=DATE-TIME:20200511T130000Z
DTEND;VALUE=DATE-TIME:20200511T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/5
DESCRIPTION:Title: Sca
led relative graph\nby Wotao Yin (UCLA) as part of One World Optimizat
ion seminar\n\n\nAbstract\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Francis Bach (INRIA)
DTSTART;VALUE=DATE-TIME:20200518T130000Z
DTEND;VALUE=DATE-TIME:20200518T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/6
DESCRIPTION:Title: On
the convergence of gradient descent for wide two-layer neural networks
\nby Francis Bach (INRIA) as part of One World Optimization seminar\n\n\nA
bstract\n[joint talk with the One World Seminar: Mathematical Methods for
Arbitrary Data Sources]\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Panayotis Mertikopoulos (CNRS / INRIA)
DTSTART;VALUE=DATE-TIME:20200713T130000Z
DTEND;VALUE=DATE-TIME:20200713T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/7
DESCRIPTION:Title: Gam
es\, dynamics and optimization\nby Panayotis Mertikopoulos (CNRS / INR
IA) as part of One World Optimization seminar\n\n\nAbstract\nThis talk aim
s to survey the triple-point interface between optimization\, game theory\
, and dynamical systems. In the first part of the talk\, we will discuss h
ow the ordinary differential equation (ODE) method of stochastic approxima
tion can be used to analyze the trajectories of stochastic first-order alg
orithms in non-convex programs – both in terms of convergence to the pro
blem's critical set as well as the avoidance of non-minimizing critical ma
nifolds. Subsequently\, we will examine the behavior of these algorithms i
n a game-theoretic context involving \\emph{several} optimizing agents\, e
ach with their individual objective. In this multi-agent setting\, the sit
uation is considerably more involved: On the one hand\, if the game being
played satisfies a monotonicity condition known as "diagonal strict convex
ity" (Rosen\, Econometrica\, 1965)\, the induced sequence of play converge
s to Nash equilibrium with probability $1$. On the other hand\, in non-mon
otone games\, the sequence of play may converge with arbitrarily high prob
ability to spurious attractors that are in no way unilaterally stable (or
even stationary). "Traps" of this type can arise even in simple two-player
zero-sum games with one-dimensional action sets and polynomial payoffs\,
a fact which highlights the fundamental gap between min-min and min-max pr
oblems.\n\nWe will discuss both classical and recent results – but not t
he proofs thereof.\n\n[joint talk with the One World Mathematical Game The
ory Seminar]\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Patrick L. Combettes (North Carolina State University)
DTSTART;VALUE=DATE-TIME:20200525T130000Z
DTEND;VALUE=DATE-TIME:20200525T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/8
DESCRIPTION:Title: Bac
k to single-resolvent iterations\, with warping\nby Patrick L. Combett
es (North Carolina State University) as part of One World Optimization sem
inar\n\n\nAbstract\nThe scope of the classical proximal point algorithm fo
r finding a zero of a monotone operator may seem rather limited. For this
reason\, the field of operator splitting has moved away from single-resolv
ent iterations and significantly expanded in various directions. We introd
uce a generalization of the standard resolvent\, called warped resolvent\,
which is constructed with the help of an auxiliary operator. This notion
will be shown to be a central tool which not only underlies a broad range
of existing algorithms\, but also serves as a platform to design new class
es of splitting methods. The discussion will include Bregman-based splitti
ng in reflexive spaces\, primal-dual methods\, inertial methods\, systems
of monotone inclusions\, and best approximation methods.\n\nBased on prepr
ints and ongoing work with M. N. Bui.\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Adrien Taylor (INRIA)
DTSTART;VALUE=DATE-TIME:20200601T130000Z
DTEND;VALUE=DATE-TIME:20200601T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/9
DESCRIPTION:Title: Com
puter-aided worst-case analyses and design of first-order methods for conv
ex optimization\nby Adrien Taylor (INRIA) as part of One World Optimiz
ation seminar\n\n\nAbstract\nIn this presentation\, I want to provide a hi
gh-level overview of recent approaches for analyzing and designing first-o
rder methods using symbolic computations and/or semidefinite programming.
A particular emphasis will be given to the "performance estimation" approach and some of its variants\, which enjoy comfortable tightness guarantees: the approach fails only when the target results are impossible to prove
. In particular\, it allows obtaining (tight) worst-case guarantees for fi
xed-step first-order methods involving a variety of oracles - that include
s explicit\, projected\, proximal\, conditional\, inexact\, or stochastic
(sub)gradient steps - and a variety of convergence measures.\n\nThe presen
tation will be example-based\, as the main ingredients necessary for understanding the methodologies are already present in the analysis of basic optimization schemes. To convince the audience\, and if time allows\, we will provide other examples\, including analyses of the Douglas-Rachford splitting\, and of a variant of the celebrated conjugate gradient method in it
s most naive form.\n\nThe methodology is implemented within the package "P
ESTO" (for "Performance EStimation TOolbox"\, available at https://github.
com/AdrienTaylor/Performance-Estimation-Toolbox)\, which allows using the
framework without the SDP modelling steps.\n\nThis talk is based on joint
works with great collaborators (who will be mentioned during the presentat
ion).\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Volkan Cevher (EPFL)
DTSTART;VALUE=DATE-TIME:20200608T130000Z
DTEND;VALUE=DATE-TIME:20200608T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/10
DESCRIPTION:Title: Sc
alable semidefinite programming\nby Volkan Cevher (EPFL) as part of On
e World Optimization seminar\n\n\nAbstract\nThis talk first introduces new
convex optimization methods based on linear minimization oracles to obtai
n numerical solutions to semidefinite programs with a low-rank matrix stre
aming model. This streaming model provides us with an opportunity to integrate\nsketching as a new tool for developing storage-optimal convex optimization methods that can solve semidefinite programs (SDPs) efficiently within the space required to write down the problem and its solution.\n\nIn particular\
, for SDP formulations\, we obtain an approximate solution within an $\\ep
silon$-error region in the objective residual and distance to feasible set
\, after a total of $\\texttt{Const}\\cdot \\epsilon^{-5/2}\\log(n/\\epsil
on)$ matrix vector multiplications for the linear minimization oracle (app
roximate eigenvalue calculation)\, and an additional $\\mathcal{O}(\\max(n
\,d)/\\epsilon^2)$ arithmetic operations for the remaining arithmetics. $\
\texttt{Const}$ is problem independent.\n\nWe then discuss a practical ine
xact augmented Lagrangian method for non-convex problems with nonlinear co
nstraints and contrast this approach to the convex one for solving SDPs. W
e characterize the total computational complexity of the non-convex method
subject to a verifiable geometric condition\, followed by numerical demonstrations that include max-cut\, unsupervised clustering\, and quadratic
assignment problems. \n\nThe talk is based on joint work with several col
laborators\, including Alp Yurtsever\, Olivier Fercoq\, Joel A. Tropp\, Ma
deleine Udell\, Fatih Sahin\, Armin Eftekhari\, and Ahmet Alacaoglu.\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Claudia Sagastizábal (University of Campinas)
DTSTART;VALUE=DATE-TIME:20200622T130000Z
DTEND;VALUE=DATE-TIME:20200622T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/11
DESCRIPTION:Title: Re
visiting Augmented Lagrangian Duals\nby Claudia Sagastizábal (Univers
ity of Campinas) as part of One World Optimization seminar\n\n\nAbstract\n
For nonconvex optimization problems\, possibly having mixed-integer variab
les\, a convergent primal-dual solution algorithm is proposed. The approac
h applies a proximal bundle method to a certain augmented Lagrangian dual
that arises in the context of the so-called generalized augmented Lagrangi
ans. We recast these Lagrangians into the framework of a classical Lagrang
ian\, by means of a special reformulation of the original problem. Thanks
to this insight\, the methodology yields a zero duality gap. Lagrangian subp
roblems can be solved inexactly without hindering the primal-dual converge
nce properties of the algorithm. Primal convergence is ensured even when t
he dual solution set is empty. The interest of the new method is assessed
on several problems\, including unit commitment in energy optimization. Th
ese problems are solved to optimality by solving separable Lagrangian subp
roblems. \n\nThe talk is based on joint work with Marcelo Cordova and Weli
ngton de Oliveira.\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yurii Nesterov (University of Louvain)
DTSTART;VALUE=DATE-TIME:20200629T130000Z
DTEND;VALUE=DATE-TIME:20200629T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/12
DESCRIPTION:Title: Su
perfast Second-Order Methods for Unconstrained Convex Optimization\nby
Yurii Nesterov (University of Louvain) as part of One World Optimization
seminar\n\n\nAbstract\nIn this talk\, we present new second-order methods
with convergence rate $O( 1/k^4)$\, where $k$ is the iteration counter. Th
is is faster than the existing lower bound for this type of scheme\, which is $O( 1/k^{7/2} )$. Our progress can be explained by a finer specifica
tion of the problem class. The main idea of this approach consists in impl
ementation of an accelerated third-order scheme using a second-order oracl
e. At each iteration of our method\, we solve a nontrivial auxiliary probl
em by a linearly convergent scheme based on the relative non-degeneracy co
ndition. During this process\, the Hessian of the objective function is co
mputed once\, and the gradient is computed $O (\\ln {1 \\over \\epsilon})$
times\, where $\\epsilon$ is the desired accuracy of the solution for our
problem.\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jérôme Bolte (Toulouse 1 University Capitole)
DTSTART;VALUE=DATE-TIME:20200706T130000Z
DTEND;VALUE=DATE-TIME:20200706T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/13
DESCRIPTION:Title: A
Variational Model for Automatic Differentiation with Applications to Deep
Learning\nby Jérôme Bolte (Toulouse 1 University Capitole) as part o
f One World Optimization seminar\n\n\nAbstract\nAutomatic differentiation is an automated implementation of differential calculus\; it plays a key computational role in several fields such as machine learning\, design optimization\, fluid dynamics\, physical modeling\, mechanics\, and finance. It is also efficient for nonsmooth problems despite the occurrence of spurious behaviors. In that case\, one indeed observes the appearance of calculus artifacts and artificial critical points that have no variational nature. Our
goal is to provide a simple mathematical model for this differentiation pr
ocess. Our motivation comes from deep learning\, which will also serve as an
illustrative model for our ideas and results.\nThe first easy\, but somehow unexpected\, fact is that there is no\n«subdifferentiation» operator
modeling nonsmooth nonconvex automatic differentiation. This fact motivate
s the introduction of a family of multivalued mappings generalizing gradie
nt-like behaviors that we call conservative fields. We shall review their
salient properties and show how they allow us to study rigorously forward
and backward automatic differentiation. We will also try to clarify the s
purious behavior of automatic differentiation and study the role of what w
e call «artificial critical points». We apply our findings to show that
the training of feedforward neural networks through mini-batch stochastic
«subgradient» methods comes with rigorous convergence guarantees.\nJoint
work with E. Pauwels.\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Xiaoming Yuan (University of Hong Kong)
DTSTART;VALUE=DATE-TIME:20200720T130000Z
DTEND;VALUE=DATE-TIME:20200720T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/14
DESCRIPTION:Title: Fr
om Optimization to Optimal Control: An Algorithmic Design Perspective\
nby Xiaoming Yuan (University of Hong Kong) as part of One World Optimizat
ion seminar\n\n\nAbstract\nOptimal control problems model the procedures o
f controlling some\nphysical processes with certain objectives\; usually t
hey are modeled as\noptimization problems with PDE and other constraints.
It is generally\nnontrivial to find efficient numerical solvers for these
problems\,\nespecially for time-dependent cases. Typical difficulties incl
ude the\nextremely high dimensionality after discretization\, ill-conditio
ned\nmatrices of the resulting systems of linear equations\, and possibly\
ncomplicated coupling of PDEs with some other simple constraints. We will\
nshow how to extend some well-developed efficient operator splitting\nalgo
rithms in the context of convex optimization problems to some elliptic\nan
d parabolic optimal control problems. Particularly\, we will highlight som
e\ncomputational techniques such as preconditioning to derive trustworthy\
nnumerical schemes for various optimal control problems.\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Coralia Carțiș (University of Oxford)
DTSTART;VALUE=DATE-TIME:20200727T130000Z
DTEND;VALUE=DATE-TIME:20200727T140000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/15
DESCRIPTION:Title: Te
nsor Methods for Non-convex Optimization Problems\nby Coralia Carțiș
(University of Oxford) as part of One World Optimization seminar\n\n\nAbs
tract\nWe investigate the evaluation complexity of finding high-order crit
ical points of non-convex smooth optimization problems when high order der
ivatives of the objective are available.\nAdaptive regularisation (and\, time permitting\, trust-region) methods are presented that use high-degree Tayl
or local models in a simple framework. Their global rates of convergence t
o\nstandard notions of first\, second and third order critical points of t
he objective are presented\, and observed to be natural generalisations of
the optimal bounds known for cubic regularisation.\nHowever\, going beyon
d third-order criticality is challenging\, requiring new notions of (appro
ximate) high-order optimality. A strong\, stable notion of a high-order lo
cal minimizer is presented\,\nalong with associated regularisation and tru
st-region variants that can find such points in a quantifiable way from a
complexity viewpoint. Extensions of these methods and results to composite
\noptimization\, as well as to special structure functions (such as those
satisfying the PL inequality) may also be discussed\, time permitting. Thi
s work is joint with Nick Gould (Rutherford Appleton\nLaboratory\, UK) and
Philippe Toint (University of Namur\, Belgium).\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Amir Beck (Tel-Aviv University)
DTSTART;VALUE=DATE-TIME:20200907T133000Z
DTEND;VALUE=DATE-TIME:20200907T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/16
DESCRIPTION:Title: Du
al Randomized Coordinate Descent Method for Solving a Class of Nonconvex P
roblems\nby Amir Beck (Tel-Aviv University) as part of One World Optim
ization seminar\n\n\nAbstract\nWe consider a nonconvex optimization proble
m consisting of maximizing the difference of two convex functions. We pres
ent a randomized method that requires low computational effort at each ite
ration. The described method is a randomized coordinate descent method emp
loyed on the so-called Toland-dual problem. We prove subsequence convergen
ce to dual stationarity points\, a new notion that we introduce and show to be tighter than standard criticality. An almost sure rate of convergence of an optimality measure of the dual sequence is proven. We demonstrate the potential of our results on three Principal Component Analysis (PCA) models\, resulting in extremely simple algorithms.\nJoint work with Marc Teboulle.\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jong-Shi Pang (University of Southern California)
DTSTART;VALUE=DATE-TIME:20200914T133000Z
DTEND;VALUE=DATE-TIME:20200914T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/17
DESCRIPTION:Title: Th
e Era of "Non"-Optimization Problems\nby Jong-Shi Pang (University of
Southern California) as part of One World Optimization seminar\n\n\nAbstra
ct\nThis talk presents a systematic discussion of our comprehensive effort
s in the study of modern “non”-optimization problems from the combined
perspectives of motivation\, theory\, and algorithms. The study is docum
ented in a forthcoming 700-page research monograph with the title: “Mode
rn Nonconvex Nondifferentiable Optimization”\, jointly authored by the s
peaker and Dr. Ying Cui at the University of Minnesota. Beginning with 100 pages of prerequisite mathematics and optimization background\, the mono
graph introduces the combined paradigm of structured learning and computat
ional optimization\, with illustrations drawn from contemporary problems i
n statistical estimation\, operations research\, optimization\, and their
diverse subfields. The goal of this monograph is multi-fold: \na) place th
e foundational and algorithmic treatment of nonconvexity and nondifferenti
ability on a rigorous footing\, focusing in particular on problems where t
hese two features are coupled\; \nb) provide the basic concepts and powerf
ul tools of nonsmooth analysis for this purpose\; \nc) present a host of s
urrogation algorithms with convergence guarantees for computing stationary
solutions of the appropriate kind\; \nd) understand the roles of these co
mputed solutions in the context of the source problems\, and \ne) set a fo
rward path for advanced research and to reach out to extended problems suc
h as nonconvex noncooperative games\, nonconvex stochastic programs\, and
robustification of nonconvex problems.\nIn short\, our efforts aim to put
in action the monumental treatise of Rockafellar and Wets on Variational A
nalysis\, open it up for practical applications\, and cement its sustained
contributions in the era of “non”-optimization problems.\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Adrian Lewis (ORIE Cornell)
DTSTART;VALUE=DATE-TIME:20200921T133000Z
DTEND;VALUE=DATE-TIME:20200921T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/18
DESCRIPTION:Title: Sm
oothness in Nonsmooth Optimization\nby Adrian Lewis (ORIE Cornell) as
part of One World Optimization seminar\n\n\nAbstract\nFast black-box nonsm
ooth optimization\, while theoretically out of reach in the worst case\, h
as long been an intriguing goal in practice. Generic concrete nonsmooth o
bjectives are "partly" smooth: their subdifferentials have locally smooth
graphs with powerful constant-rank properties\, often associated with hid
den structure in the objective. One typical example is the proximal mappi
ng for the matrix numerical radius\, whose output is surprisingly often a
"disk" matrix. Motivated by this expectation of partial smoothness\, this
talk describes a Newtonian black-box algorithm for general nonsmooth opti
mization. Local convergence is provably superlinear on a representative c
lass of objectives\, and early numerical experience is promising more gene
rally.\n\nJoint work with Dima Drusvyatskiy\, XY Han\, Alex Ioffe\, Jingwe
i Liang\, Michael Overton\, Tonghua Tian\, and Calvin Wylie.\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stephen Wright (University of Wisconsin)
DTSTART;VALUE=DATE-TIME:20200928T133000Z
DTEND;VALUE=DATE-TIME:20200928T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/19
DESCRIPTION:Title: Se
cond-Order Methods for Nonconvex Optimization with Complexity Guarantees\nby Stephen Wright (University of Wisconsin) as part of One World Optim
ization seminar\n\n\nAbstract\nWidely used algorithms for smooth nonconvex
optimization problems - unconstrained\, bound-constrained\, and general e
quality-constrained - can be modified slightly to ensure that approximate
first- and\nsecond-order optimal points are found\, with complexity guaran
tees that depend on the desired accuracy. We discuss methods constructed f
rom Newton's method\, conjugate gradients\, randomized Lanczos\, trust-reg
ion\nframeworks\, log-barrier\, and augmented Lagrangians. We derive upper
bounds on various measures of complexity in terms of the tolerances requi
red. Our methods use Hessian information only in the form of Hessian-vecto
r products - an operation that does not require the Hessian itself to be e
valuated or stored explicitly.\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aris Daniilidis (University of Chile / Autonomous University of Ba
rcelona)
DTSTART;VALUE=DATE-TIME:20201005T133000Z
DTEND;VALUE=DATE-TIME:20201005T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/20
DESCRIPTION:Title: As
ymptotic Study of the Sweeping Process\nby Aris Daniilidis (University
of Chile / Autonomous University of Barcelona) as part of One World Optim
ization seminar\n\n\nAbstract\nLet $r\\mapsto S(r)$ be a set-valued mappin
g with nonempty values and a closed semi-algebraic graph (or more generall
y\, with a graph which is definable in some o-minimal structure). We shall
be interested in the asymptotic behavior of the orbits of the so-called s
weeping process $$\\dot x(r) \\in - N_{S(r)}(x(r))\, \\quad r>0.\\hspace{2cm} (S
PO)$$ \n\nKurdyka (Ann. Inst. Fourier\, 1998)\, in the framework of a grad
ient dynamics of a $C^1$-smooth definable function $f$\, generalized the Łojasiewicz inequality and obtained a control of the asymptotic behavior of
the gradient orbits in terms of a desingularizing function $\\Psi$ depend
ing on $f$. We shall show that an analogous technique to the one used by K
urdyka can be replicated to our setting for the sweeping dynamics. Our met
hod recovers the aforementioned result of Kurdyka\, by simply considering
the sweeping process defined by the sublevel sets of the function $f$: ind
eed\, in this case setting $S(r) = [f \\leq r]$\, we deduce that the orbit
s of (SPO) are in fact gradient orbits for $f$\, and what is nowadays called the (smooth) Kurdyka-Łojasiewicz inequality is recovered.\n\nThis talk is base
d on a work in collaboration with D. Drusvyatskiy (Seattle).\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:R. Tyrrell Rockafellar (University of Washington)
DTSTART;VALUE=DATE-TIME:20201012T133000Z
DTEND;VALUE=DATE-TIME:20201012T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/21
DESCRIPTION:Title: Au
gmented Lagrangians and Hidden Convexity in Sufficient Conditions for Loca
l Optimality\nby R. Tyrrell Rockafellar (University of Washington) as
part of One World Optimization seminar\n\n\nAbstract\nSecond-order suffici
ent conditions for local optimality have long been central to designing so
lution algorithms and justifying claims about their convergence. In this
talk a far-reaching extension of such conditions\, called variational suff
iciency\, will be explained in territory beyond just nonlinear programming
. Variational sufficiency is already known to support multiplier methods
that are able\, even without convexity\, to achieve problem decomposition\
, but further insight has been needed to see how it coordinates with other
sufficient conditions. In fact it characterizes local optimality in term
s of having a convex-concave-type local saddle point of an augmented Lagra
ngian function. \n\nA stronger version of variational sufficiency corresp
onds in turn to local strong convexity in the primal argument of that func
tion and a property of augmented tilt stability which offers crucial aid t
o Lagrange multiplier methods at a fundamental level of analysis. Moreove
r\, that strong version can be translated through second-order variational
analysis into statements which may readily be compared to existing suffic
ient conditions in nonlinear programming\, second-order cone programming\,
and other problem formulations that are able to incorporate nonsmooth obj
ectives and regularization terms.\n\nThe address and password of the Zoom room of the seminar are sent by e-mail to the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Boris Polyak (Institute for Control Science Moscow)
DTSTART;VALUE=DATE-TIME:20201019T133000Z
DTEND;VALUE=DATE-TIME:20201019T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/22
DESCRIPTION:Title: St
atic Linear Feedback for Control as Optimization Problem\nby Boris Pol
yak (Institute for Control Science Moscow) as part of One World Optimizati
on seminar\n\n\nAbstract\nIf we fix control as static linear feedback\,
an optimal control problem reduces to an optimization problem with respect to
the feedback gain matrix. We consider properties of the arising performan
ce functions (smoothness\, convexity\, connectedness of sublevel sets) and
provide gradient-like methods for optimization. The following examples ar
e addressed: linear quadratic regulator\; static output feedback\; design
of low-order controllers. Possible extensions are discussed.\n\nTalk in co
llaboration with I. Fatkhullin\, P. Scherbakov\, M. Khlebnikov\n\nThe address
and password of the zoom room of the seminar are sent by e-mail on the
mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Dmitriy Drusvyatskiy (University of Washington)
DTSTART;VALUE=DATE-TIME:20201123T143000Z
DTEND;VALUE=DATE-TIME:20201123T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/23
DESCRIPTION:Title: St
ochastic Optimization with Decision-dependent Distributions\nby Dmitri
y Drusvyatskiy (University of Washington) as part of One World Optimizatio
n seminar\n\n\nAbstract\nStochastic optimization problems often involve data
distributions that change in reaction to the decision variables. For example\,
deployment of a classifier by a learning system\, when made public\, often
causes the population to adapt their attributes in order to increase the
likelihood of being favorably labeled---a process called "gaming". Even when
the population is agnostic to the classifier\, the decisions made by the
learning system (e.g. loan approval) may inadvertently alter the profile of
the population (e.g. credit score). Recent works have identified an
intriguing solution concept for such problems as an "equilibrium" of a
certain game. Continuing this line of work\, we show that typical stochastic
algorithms---originally designed for static problems---can be applied
directly for finding such equilibria with little loss in efficiency. The
reason is simple to explain: the main consequence of the distributional
shift is that it corrupts the algorithms with a bias that decays linearly
with the distance to the solution. Using this perspective\, we obtain sharp
convergence guarantees for popular algorithms\, such as stochastic gradient\,
clipped gradient\, proximal point\, and dual averaging methods\, along with
their accelerated and proximal variants.\n\nJoint work with Lin Xiao
(Facebook AI Research)\n\nThe address and password of the zoom room of the
seminar are sent by e-mail on the mailing list of the seminar one day before
each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael P. Friedlander (University of British Columbia)
DTSTART;VALUE=DATE-TIME:20201026T143000Z
DTEND;VALUE=DATE-TIME:20201026T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/24
DESCRIPTION:Title: Po
lar deconvolution of mixed signals\nby Michael P. Friedlander (Univers
ity of British Columbia) as part of One World Optimization seminar\n\n\nAb
stract\nThe signal demixing problem seeks to separate the superposition of
multiple signals into its constituent components. We model the superposit
ion process as the polar convolution of atomic sets\, which allows us to u
se the duality of convex cones to develop an efficient two-stage algorithm
with sublinear iteration complexity and linear storage. If the signal mea
surements are random\, the polar deconvolution approach stably recovers lo
w-complexity and mutually-incoherent signals with high probability and wit
h optimal sample complexity. Numerical experiments on both real and synthe
tic data confirm the theory and efficiency of the proposed approach.\n\nJo
int work with Zhenan Fan\, Halyun Jeong\, and Babhru Joshi at the Universi
ty of British Columbia.\n\nThe address and password of the zoom room of the
seminar are sent by e-mail on the mailing list of the seminar one day
before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Defeng Sun (Hong Kong Polytechnic University)
DTSTART;VALUE=DATE-TIME:20201130T143000Z
DTEND;VALUE=DATE-TIME:20201130T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/25
DESCRIPTION:Title: Se
veral Observations about Using the ALM + Semismooth Newton Method for Solv
ing Large Scale Semidefinite Programming and Beyond\nby Defeng Sun (Ho
ng Kong Polytechnic University) as part of One World Optimization seminar\
n\n\nAbstract\nSemidefinite Programming (SDP) has been one of the major re
search fields in optimization during the last three decades and interior p
oint methods (IPMs) are perhaps the most robust and efficient algorithms f
or solving small to medium sized SDP problems. For large scale SDPs\, IPMs
are no longer viable due to their inherent high memory requirements and c
omputational costs at each iteration. In this talk\, we will summarize wh
at we observed during the last 15 years or so in combining the augmented L
agrangian algorithm with the semismooth Newton method for solving the dual
of large-scale SDP and convex quadratic SDP. We will emphasize the
importance of constraint non-degeneracy in numerical implementations an
d the quadratic growth condition in convergence rate analysis. Easy-to-imp
lement stopping criteria for the augmented Lagrangian subproblems will als
o be introduced. All these features are implemented in the publicly available
software packages SDPNAL/SDPNAL+ and QSDPNAL.\n\nThe address and password
of the zoom room of the seminar are sent by e-mail on the mailing list
of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/25/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Guoyin Li (University of New South Wales)
DTSTART;VALUE=DATE-TIME:20201207T143000Z
DTEND;VALUE=DATE-TIME:20201207T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/26
DESCRIPTION:Title: Es
timating the Exponents of Kurdyka-Łojasiewicz (KL) Inequality and Error B
ounds for Optimization Models\nby Guoyin Li (University of New South W
ales) as part of One World Optimization seminar\n\n\nAbstract\nThe Kurdyka
-Łojasiewicz (KL) inequality and error bounds are two fundamental tools f
or establishing convergence of many numerical methods. In particular\, the
exponents of the KL inequality and error bounds play an important role in
estimating the convergence rate of many contemporary first-order methods.
Nevertheless\, these exponents are extremely hard to estimate in general\
, particularly in the case where the associated mappings are not polyhedra
l. In this talk\, we will outline some strategies for estimating or
identifying these exponents by exploiting the so-called inf-projection
operation and specific structures such as polynomial structure\, semidefinite
cone p
rogram representability and $C^2$-cone reducible structures.\n\nThe address
and password of the zoom room of the seminar are sent by e-mail on the
mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ya-xiang Yuan (Chinese Academy of Sciences)
DTSTART;VALUE=DATE-TIME:20201214T143000Z
DTEND;VALUE=DATE-TIME:20201214T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/27
DESCRIPTION:Title: Or
thogonality-free Approaches for Optimization Problems on Stiefel Manifold<
/a>\nby Ya-xiang Yuan (Chinese Academy of Sciences) as part of One World O
ptimization seminar\n\n\nAbstract\nIn this talk\, I will discuss some orth
ogonality-free approaches for optimization problems on the Stiefel manifold.
The Stiefel manifold consists of matrices with orthonormal columns.
Optimization problems with orthogonality constraints appear in many important
applications such as leading eigenvalue computation\, discretized Kohn-Sham
total energy minimization\, and sparse principal component analysis. We
present new algorithms for solving optimization problems on the Stiefel
manifold. These algorithms are based on penalty functions\, so there is no
need to carry out orthogonalization calculations at each iteration. The major comp
utation cost of orthogonality-free algorithms is in the form of matrix-mat
rix multiplication\, which has the advantage of being parallelized easily.
Problems with both smooth and nonsmooth objective functions are considere
d. Theoretical properties of our algorithms are discussed and numerical ex
periments are also presented.\n\nThe address and password of the zoom room
of the seminar are sent by e-mail on the mailing list of the seminar one
day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Uday V. Shanbhag (Pennsylvania State University)
DTSTART;VALUE=DATE-TIME:20201102T143000Z
DTEND;VALUE=DATE-TIME:20201102T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/28
DESCRIPTION:Title: In
exact and Distributed Best-Response Schemes for Stochastic Nash Equilibriu
m Problems\nby Uday V. Shanbhag (Pennsylvania State University) as par
t of One World Optimization seminar\n\n\nAbstract\nWe consider the class o
f Nash equilibrium problems where players solve convex optimization proble
ms with expectation-valued objectives. In the first part of the presentati
on\, we discuss a class of inexact best-response schemes in which an inexa
ct best-response step is computed via stochastic approximation. We conside
r synchronous\, asynchronous\, and randomized schemes and provide rate and
complexity guarantees in each instance. In the second part of the present
ation\, we consider distributed best-response schemes for aggregative game
s. In such settings\, an (inexact) best-response step is overlaid with a c
onsensus step. In addition to the oracle and iteration complexity\, we exa
mine the communication complexity of such schemes for computing suitably d
efined ϵ-stochastic Nash equilibria.\n\nThis first part of this is joint
work with Jinlong Lei\, Jong-Shi Pang and Suvrajeet Sen while the second p
art of this work is joint with Jinlong Lei.\n\nThe address and password of
the zoom room of the seminar are sent by e-mail on the mailinglist of the
seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/28/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ingrid Daubechies (Duke University)
DTSTART;VALUE=DATE-TIME:20201109T143000Z
DTEND;VALUE=DATE-TIME:20201109T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/29
DESCRIPTION:Title: Di
scovering low-dimensional manifolds in high-dimensional data sets\nby
Ingrid Daubechies (Duke University) as part of One World Optimization semi
nar\n\n\nAbstract\nDiffusion methods help understand and denoise data sets\;
when there is additional structure (as is often the case)\, one can use
(and get additional benefit from) a fiber bundle model.\n\nThe address
and password of the zoom room of the seminar are sent by e-mail on the
mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/29/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Asu Ozdaglar (Massachusetts Institute of Technology)
DTSTART;VALUE=DATE-TIME:20201116T143000Z
DTEND;VALUE=DATE-TIME:20201116T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/30
DESCRIPTION:Title: Ro
bustness in Machine Learning and Optimization: A Minmax Approach\nby A
su Ozdaglar (Massachusetts Institute of Technology) as part of One World O
ptimization seminar\n\n\nAbstract\nMinmax problems arise in a large number
of problems in optimization\, including worst-case design\, duality theor
y\, and zero-sum games\, but also have become popular in machine learning
in the context of adversarial robustness and Generative Adversarial Networ
ks (GANs). This talk will review our recent work on solving minmax problem
s using discrete-time gradient based optimization algorithms. We focus on
Optimistic Gradient Descent Ascent (OGDA) and Extra-gradient (EG) methods\
, which have attracted much attention in the recent literature because of
their superior empirical performance in GAN training. We show that OGDA a
nd EG can be seen as approximations of the classical proximal point method
and use this interpretation to establish convergence rate guarantees for
these algorithms. These guarantees are provided for the ergodic (averaged)
iterates of the algorithms. We also consider the last iterate of EG and
present convergence rate guarantees for the last iterate for smooth convex
-concave saddle point problems. We finally turn to analysis of generalizat
ion properties of gradient based minmax algorithms using the algorithmic s
tability framework defined by Bousquet and Elisseeff. Our generalization a
nalysis suggests the superiority of gradient descent ascent (GDA) over the
GDmax algorithm (which involves exact solution of the maximization problem
at each iteration) in the nonconvex-concave case\, provided that similar
learning rates are used in the descent and ascent steps.\n\nThe address and
password of the zoom room of the seminar are sent by e-mail on the mailing
list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/30/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Monique Laurent (CWI Amsterdam & Tilburg University)
DTSTART;VALUE=DATE-TIME:20210125T143000Z
DTEND;VALUE=DATE-TIME:20210125T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/31
DESCRIPTION:Title: Su
m-of-Squares Approximation Hierarchies for Polynomial Optimization\nby
Monique Laurent (CWI Amsterdam & Tilburg University) as part of One World
Optimization seminar\n\n\nAbstract\nMinimizing a polynomial function over
a compact set is a computationally hard problem\, already when restrictin
g to quadratic polynomials and to simple sets like a ball\, a hypercube\,
or a simplex. We discuss some hierarchies of bounds introduced by Lasserre\,
which are based on searching for an optimal sum-of-squares density function
minimizing the expected value of the polynomial f over the compact set K.\n\nWe
will discuss several techniques that allow one to analyse the performance
guarantee of these bounds depending on the degree of the sum-of-squares
density. This includes usin
g an eigenvalue reformulation of the bounds\, links to extremal roots of o
rthogonal polynomials\, and reducing to the univariate case by means of pu
sh-forward measures.\n\nBased on joint works with Etienne de Klerk and Luc
as Slot.\n\nThe address and password of the zoom room of the seminar are s
ent by e-mail on the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/31/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hedy Attouch (University of Montpellier)
DTSTART;VALUE=DATE-TIME:20210118T143000Z
DTEND;VALUE=DATE-TIME:20210118T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/32
DESCRIPTION:Title: Ac
celeration of First-Order Optimization Algorithms via Inertial Dynamics wi
th Hessian Driven Damping\nby Hedy Attouch (University of Montpellier)
as part of One World Optimization seminar\n\n\nAbstract\nIn a Hilbert spa
ce\, for convex optimization\, we report on recent advances regarding the
acceleration of first-order algorithms. We rely on inertial dynamics with
damping driven by the Hessian\, and the link between continuous dynamic sy
stems and algorithms obtained by temporal discretization. We first review
the classical results\, from Polyak's heavy ball with friction method to N
esterov's accelerated gradient method. Then we introduce the damping drive
n by the Hessian\, which enters the dynamics in the form
$\\nabla^2 f(x(t))\\dot{x}(t)$. By treating this term as the time derivative
of $\\nabla f(x(t))$\, this yields\, after discretization\, first-order
algorithms. As 
a fundamental property\, this geometric damping makes it possible to atten
uate the oscillations. In addition to the fast convergence of the values\,
the algorithms thus obtained show a rapid convergence towards zero of the
gradients. The introduction of time scale factors further accelerates the
se algorithms. On the basis of a regularization technique using the Moreau
envelope\, we extend the method to non-smooth convex functions with exten
ded real values. Numerical results for structured optimization problems su
pport our theoretical findings. Finally\, we mention recent developments
concerning the extension of these results to general monotone inclusions\,
inertial ADMM algorithms\, dry friction\, and the inexact/stochastic case\,
thus showing the versatility of the method.\n\nThis lecture is based on
the recent collaborative article:\nH. Attouch\, Z. Chbani\, J. Fadili\, H
. Riahi\, First-order optimization algorithms via inertial systems with H
essian driven damping\, Math. Program.\, (2020)\, https://doi.org/10.1007/
s10107-020-01591-1\, preprint available at hal-02193846.\n\nThe address a
nd password of the zoom room of the seminar are sent by e-mail on the
mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/32/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Boris Mordukhovich (Wayne State University)
DTSTART;VALUE=DATE-TIME:20210201T143000Z
DTEND;VALUE=DATE-TIME:20210201T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/33
DESCRIPTION:Title: Ge
neralized Newton Algorithms For Nonsmooth Systems With Applications To Las
so\nby Boris Mordukhovich (Wayne State University) as part of One Worl
d Optimization seminar\n\n\nAbstract\nWe propose and develop several gener
alized Newton-type algorithms to solve nonsmooth optimization problems and
subgradient systems that are based on constructions and results of (mainl
y second-order) variational analysis and generalized differentiation. Solv
ability of these algorithms is proved in rather broad settings\, and then
verifiable conditions for their local and global superlinear convergence a
re obtained. Particular attention is paid to problems of convex composite o
ptimization for which a generalized damped Newton algorithm exhibiting glo
bal superlinear convergence is designed. The efficiency of the latter algo
rithm is demonstrated by solving a class of Lasso problems that are well-r
ecognized in applications to machine learning and statistics. For this cla
ss of nonsmooth optimization problems\, we conduct numerical experiments a
nd compare the obtained results with those achieved by using other first-o
rder and second-order methods.\n\nThis talk is based on recent joint works
with P. D. Khanh (HCMUE)\, V. T. Phat (WSU)\, M. E. Sarabi (Miami Univ.)\
, and D. B. Tran (WSU).\n\nThe address and password of the zoom room of th
e seminar are sent by e-mail on the mailing list of the seminar one day bef
ore each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/33/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Katya Scheinberg (ORIE Cornell)
DTSTART;VALUE=DATE-TIME:20210215T143000Z
DTEND;VALUE=DATE-TIME:20210215T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/34
DESCRIPTION:Title: Co
mplexity Analysis Framework of Adaptive Optimization Methods via Martingal
es\nby Katya Scheinberg (ORIE Cornell) as part of One World Optimizati
on seminar\n\n\nAbstract\nWe will present a very general framework for unc
onstrained adaptive optimization which encompasses standard methods such a
s line search and trust region methods that use stochastic function measur
ements and/or derivatives. In particular\, methods that fall in this
framework retain desirable practical features such as step acceptance
criteria\, trust-region adjustment\, and the ability to utilize second-order
models\, and they enjoy the same convergence rates as their deterministic counterparts. The 
framework is based on bounding the expected stopping time of a stochastic
process\, which satisfies certain assumptions. Thus this framework provide
s strong convergence analysis under weaker conditions than alternative app
roaches in the literature. We will conclude with a discussion about some i
nteresting open questions.\n\nThe address and password of the zoom room of
the seminar are sent by e-mail on the mailing list of the seminar one day
before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/34/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Antonin Chambolle (CMAP / École Polytechnique Palaiseau)
DTSTART;VALUE=DATE-TIME:20210301T143000Z
DTEND;VALUE=DATE-TIME:20210301T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/35
DESCRIPTION:Title: De
rivatives of Solutions of Saddle-Point Problems\nby Antonin Chambolle
(CMAP / École Polytechnique Palaiseau) as part of One World Optimization
seminar\n\n\nAbstract\nIn a recent paper\, we have been interested in opti
mizing the quality of the solutions of convex optimization problems among
a class of consistent approximations of the total variation. Such a problem
requires an efficient way to differentiate a loss function with respect to
the solution of a convex problem\, computed by an iterative algorithm for
which classical back-propagation is not always possible due to memory
limitations. We will describe in this talk a simple way to compute the
adjoint states\, which allows one to estimate such gradients\, and discuss
issues related to the smoothness of the objective.\n\nJoint work with T. Pock (TU Graz
).\n\nThe address and password of the zoom room of the seminar are sent by
e-mail on the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/35/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Roberto Cominetti (Adolfo Ibáñez University)
DTSTART;VALUE=DATE-TIME:20210208T143000Z
DTEND;VALUE=DATE-TIME:20210208T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/36
DESCRIPTION:Title: Co
nvergence Rates for Krasnoselskii-Mann Fixed-Point Iterations\nby Robe
rto Cominetti (Adolfo Ibáñez University) as part of One World Optimizati
on seminar\n\n\nAbstract\nA popular method to approximate a fixed point of
a non-expansive map $T : C \\to C$ is the Krasnoselskii-Mann iteration \n
\n$$(KM)\\ \\ \\ \\hspace{3cm} x_{n+1} = (1 - \\alpha_{n+1}) x_n + \\alpha_{n+1} T x_n.$$\n\nThis covers a wide range of iterative methods in c
onvex minimization\, equilibria\, and beyond. In the Euclidean setting\, a
flexible method to obtain convergence rates for this iteration is the PEP
methodology introduced by Drori and Teboulle (2012)\, which is based on s
emi-definite programming. When the underlying norm is no longer Hilbert\,
PEP can be substituted by an approach based on recursive estimates obtaine
d by using optimal transport. This approach can be traced back to early wo
rk by Baillon and Bruck (1992\, 1996). In this talk we describe this optim
al transport technique\, and we survey some recent progress that settles t
wo conjectures by Baillon and Bruck\, and yields the following tight metri
c estimate for the fixed-point residuals\n\n$$\\|x_n - T x_n\\| \\le \\frac{\\mathrm{diam}(C)}{\\sqrt{\\pi \\sum_{k=1}^n \\alpha_k (1-\\alpha_k)}}.$$\n\nThe recursive esti
mates exhibit a very rich structure and induce a very peculiar metric over
the integers. The analysis exploits an unexpected connection with discret
e probability and combinatorics\, related to the Gambler’s ruin for sums
of non-homogeneous Bernoulli trials. If time allows\, we will briefly dis
cuss the extension to inexact iterations\, and a connection to Markov chai
ns with rewards.\n\nNote: The talk will be based on joint work with Mario
Bravo\, Matías Pavez-Signé\, José Soto\, and José Vaisman. Papers are
available at https://sites.google.com/site/cominettiroberto/.\n\nThe addre
ss and password of the zoom room of the seminar are sent by e-mail on the
mailing list of the seminar one day before each talk.\n
LOCATION:https://researchseminars.org/talk/OWOS/36/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jonathan Eckstein (Rutgers University)
DTSTART;VALUE=DATE-TIME:20210222T143000Z
DTEND;VALUE=DATE-TIME:20210222T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/37
DESCRIPTION:Title: Pr
ogressive Hedging and Asynchronous Projective Hedging for Convex Stochasti
c Programming\nby Jonathan Eckstein (Rutgers University) as part of On
e World Optimization seminar\n\n\nAbstract\nOperator splitting methods for
convex optimization and monotone inclusions have their roots in the solut
ion of partial differential equations\, and have since become popular in m
achine learning and image processing applications. Their application to "
operations-research-style" optimization problems has been somewhat limited
.\n\nA notable exception is their application to stochastic programming.
In a paper published in 1991\, Rockafellar and Wets proposed the progressi
ve hedging (PH) algorithm to solve large-scale convex stochastic programmi
ng problems. Although they proved the convergence of the method from firs
t principles\, it was already known to them that PH was an operator splitt
ing method.\n\nThis talk will present a framework for convex stochastic pr
ogramming and show that applying the ADMM (and thus Douglas-Rachford split
ting) to it yields the PH algorithm. The equivalence of PH to ADMM has lo
ng been known but not explicitly published.\n\nNext\, the talk will apply
the projective splitting framework of Combettes and Eckstein to the same f
ormulation\, yielding a method which is similar to PH but can be implement
ed in a partially asynchronous manner. We call this method "asynchronous p
rojective hedging" (APH). Unlike most decomposition methods\, it does not
need to solve every subproblem at every iteration\; instead\, each iterati
on may solve just a single subproblem or a small subset of the available s
ubproblems.\n\nFinally\, the talk will describe work integrating the APH a
lgorithm into mpi-sppy\, a Python package for modeling and distributed par
allel solution of stochastic programming problems. Mpi-sppy uses the Pyomo
Python-based optimization modeling system. Our experience includes using 
8\,000 processor cores to solve a test problem instance with 1\,000\,000 s
cenarios.\n\nThis talk presents joint research with Jean-Paul Watson (Lawr
ence Livermore National Laboratory\, USA)\, and David Woodruff (University
of California\, Davis).\n\nThe address and password of the zoom room of t
he seminar are sent by e-mail on the mailing list of the seminar one day be
fore each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/37/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gabriel Peyré (CNRS & École Normale Supérieure)
DTSTART;VALUE=DATE-TIME:20210322T143000Z
DTEND;VALUE=DATE-TIME:20210322T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/38
DESCRIPTION:Title: Sc
aling Optimal Transport for High Dimensional Learning\nby Gabriel Peyr
é (CNRS & École Normale Supérieure) as part of One World Optimization s
eminar\n\n\nAbstract\nOptimal transport (OT) has recently gained a lot of
interest in machine learning. It is a natural tool for comparing probability
distributions in a geometrically faithful way. It finds applications in both
supervised learning (using geometric loss functions) and unsupervised lea
rning (to perform generative model fitting). OT is however plagued by the
curse of dimensionality\, since it might require a number of samples which
grows exponentially with the dimension. In this talk\, I will review entr
opic regularization methods which define geometric loss functions approxim
ating OT with a better sample complexity. More information and references
can be found on the website of our book "Computational Optimal Transport"
https://optimaltransport.github.io/\n\nThe address and password of the zoo
m room of the seminar are sent by e-mail on the mailing list of the seminar
one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/38/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lieven Vandenberghe (UCLA)
DTSTART;VALUE=DATE-TIME:20210308T143000Z
DTEND;VALUE=DATE-TIME:20210308T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/39
DESCRIPTION:Title: Br
egman Proximal Methods for Semidefinite Optimization\nby Lieven Vanden
berghe (UCLA) as part of One World Optimization seminar\n\n\nAbstract\nGen
eralized proximal methods based on Bregman distances offer the possibility
of matching the distance to the structure in the problem\, with the goal
of reducing the complexity per iteration. In semidefinite optimization\, t
he use of a generalized distance can allow us to avoid expensive eigendeco
mpositions\, needed in standard proximal methods for Euclidean projection
onto the positive semidefinite cone. We discuss applications to sparse semi
definite optimization\, and to other types of structure that are common in
control and signal processing\, such as Toeplitz structure.\n\nThe addres
s and password of the zoom room of the seminar are sent by e-mail on the
mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/39/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Frank E. Curtis (Lehigh University)
DTSTART;VALUE=DATE-TIME:20210315T143000Z
DTEND;VALUE=DATE-TIME:20210315T153000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/40
DESCRIPTION:Title: SQ
P Methods for Deterministically Constrained Stochastic Optimization\nb
y Frank E. Curtis (Lehigh University) as part of One World Optimization se
minar\n\n\nAbstract\nStochastic gradient and related methods for solving s
tochastic optimization problems have been studied extensively in recent ye
ars. It has been shown that such algorithms and much of their convergence
and complexity guarantees extend in straightforward ways when one conside
rs problems involving simple constraints\, such as when one can perform pr
ojections onto the feasible region of the problem. However\, settings wit
h general nonlinear constraints have received less attention\, and many of
the approaches that have been proposed for solving such problems resort t
o using penalty or (augmented) Lagrangian methods\, which are often not th
e most effective strategies. In this work\, we propose and analyze stocha
stic optimization algorithms for deterministically constrained problems ba
sed on the sequential quadratic optimization (commonly known as SQP) metho
dology. We discuss the rationale behind our proposed techniques\, converg
ence in expectation and complexity guarantees for our algorithms\, and the
results of preliminary numerical experiments that we have performed.\n\nT
he address and password of the zoom room of the seminar are sent by e-mail
on the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/40/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sylvain Sorin (CNRS & Sorbonne University)
DTSTART;VALUE=DATE-TIME:20210329T133000Z
DTEND;VALUE=DATE-TIME:20210329T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/41
DESCRIPTION:Title: No
-Regret Algorithms in On-Line Learning\, Games and Convex Optimization\nby Sylvain Sorin (CNRS & Sorbonne University) as part of One World Opti
mization seminar\n\n\nAbstract\nThe purpose of this talk is to underline
links between no-regret algorithms used in learning\, games\, and convex
optimization.\nWe will describe and analyze Projected Dynamics\, Mirror
Descent\, and Dual Averaging.\nIn particular\, we will study continuous- and
discrete-time versions and their connections.\nWe will discuss:\n- the link
with variational inequalities\,\n- the speed of convergence of the no-regret
evaluation\,\n- the convergence of the trajectories.\n\nThe a
ddress and password of the zoom room of the seminar are sent by e-mail on
the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/41/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Henry Wolkowicz (University of Waterloo)
DTSTART;VALUE=DATE-TIME:20210405T133000Z
DTEND;VALUE=DATE-TIME:20210405T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/42
DESCRIPTION:Title: Ro
bust Interior Point Methods for Quantum Key Rate Computation for Quantum K
ey Distribution\nby Henry Wolkowicz (University of Waterloo) as part o
f One World Optimization seminar\n\n\nAbstract\nWe use facial reduction on
the nonlinear objective function and derive a stable reformulation of the
quantum key rate for finite dimensional quantum key distribution (QKD) pr
oblems. This avoids the difficulties that current algorithms face from
singularities arising from the loss of positive definiteness of the
distributions. This allows for the derivation of an efficient Gauss-Newton
interior poi
nt approach. We provide provable lower and upper bounds for the hard nonli
near semidefinite programming problem.\n\nEmpirical evidence illustrates t
he strength of this approach as we obtain high accuracy solutions and theo
retically guaranteed upper and lower bounds for QKD. We compare with other
current approaches in the literature.\n\nJoint work with: Hao Hu\, Jiyoun
g Im\, Jie Lin\, Norbert Lutkenhaus.\n\nThe address and password of the zo
om room of the seminar are sent by e-mail on the mailing list of the semina
r one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/42/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael Ulbrich (Technical University of Munich)
DTSTART;VALUE=DATE-TIME:20210412T133000Z
DTEND;VALUE=DATE-TIME:20210412T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/43
DESCRIPTION:Title: An
Approximation Scheme for Distributionally Robust Nonlinear Optimization w
ith Applications to PDE-Constrained Problems under Uncertainty\nby Mic
hael Ulbrich (Technical University of Munich) as part of One World Optimiz
ation seminar\n\n\nAbstract\nWe present a sampling-free approximation sche
me for distributionally robust nonlinear optimization (DRO). The DRO probl
em can be written in a bilevel form that involves maximal (i.e.\, worst ca
se) value functions of expectations of nonlinear functions that depend on t
he optimization variables and random parameters. The maximum values are ta
ken over an ambiguity set of probability measures which is defined by mome
nt constraints. To achieve a good compromise between tractability and accu
racy we approximate nonlinear dependencies of the\ncost / constraint funct
ions on the random parameters by quadratic Taylor expansions. This results
in an approximate DRO problem which on the lower level then involves valu
e functions of parametric trust-region problems and of parametric semidefi
nite programs. Using trust-region duality\, a barrier approach\, and other
techniques we construct gradient consistent smoothing functions for these
value functions and show global convergence of a corresponding homotopy m
ethod. We discuss the application of our approach to PDE constrained optim
ization under uncertainty and present numerical results.\n\nThe address an
d password of the zoom room of the seminar are sent by e-mail on the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/43/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Asen L. Dontchev (University of Michigan)
DTSTART;VALUE=DATE-TIME:20210419T133000Z
DTEND;VALUE=DATE-TIME:20210419T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/44
DESCRIPTION:Title: Se
nsitivity Analysis without Derivatives\nby Asen L. Dontchev (Universit
y of Michigan) as part of One World Optimization seminar\n\n\nAbstract\nTh
e classical sensitivity analysis developed in the early days of optimizati
on and control revolves around determining derivatives of optimal values a
nd solutions with respect to parameters in the problem considered. In problems with constraints\, however\, (standard) differentiability typically fa
ils. The idea to obtain implicit function theorems without differentiabil
ity goes back to Hildebrandt and Graves in their paper from 1927 and has
been developed for optimization problems in the 1980s. \nIn this talk
some major developments in sensitivity analysis of optimization problems
in the last several decades are outlined. Estimates for solution depende
nce on various perturbations are derived based on regularity properties o
f mappings involved in the description of the problem. Applications to ma
thematical programming\, numerical optimization and optimal control illu
strate the theoretical findings.\n\nThe address and password of the zoom r
oom of the seminar are sent by e-mail on the mailing list of the seminar on
e day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/44/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bernd Sturmfels (MPI Leipzig)
DTSTART;VALUE=DATE-TIME:20210426T133000Z
DTEND;VALUE=DATE-TIME:20210426T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/45
DESCRIPTION:Title: Wa
sserstein Distance to Independence Models\nby Bernd Sturmfels (MPI Lei
pzig) as part of One World Optimization seminar\n\n\nAbstract\nAn independ
ence model for discrete random variables is a variety in a probability sim
plex. \nGiven any data distribution\, we seek to minimize the Wasserstein
distance to the model.\nThat distance comes from a polyhedral norm whose u
nit ball is dual to an alcoved polytope.\nThe solution to our optimization
problem is a piecewise algebraic function of the data.\nIn this talk we d
iscuss the algebraic and geometric structure of this function.\n\nReferenc
e: arXiv:2003.06725\n\nThe address and password of the zoom room of the se
minar are sent by e-mail on the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/45/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jane J. Ye (University of Victoria)
DTSTART;VALUE=DATE-TIME:20210503T133000Z
DTEND;VALUE=DATE-TIME:20210503T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/46
DESCRIPTION:Title: On
Solving Bilevel Programming Problems\nby Jane J. Ye (University of Vi
ctoria) as part of One World Optimization seminar\n\n\nAbstract\nA bilevel
programming problem is a sequence of two optimization problems where the
constraint region of the upper level problem is determined implicitly by t
he solution set to the lower level problem. It can be used to model a two-
level hierarchical system where the two decision makers have different obj
ectives and make their decisions on different levels of hierarchy. Recentl
y more and more applications including those in machine learning have been
modelled as bilevel optimization problems. In this talk\, I will report s
ome recent developments in optimality conditions and numerical algorithms
for solving this class of very difficult optimization problems.\n\nThe add
ress and password of the zoom room of the seminar are sent by e-mail on th
e mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/46/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marco Antonio López Cerdá (University of Alicante)
DTSTART;VALUE=DATE-TIME:20210517T133000Z
DTEND;VALUE=DATE-TIME:20210517T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/47
DESCRIPTION:Title: Pr
operties of Inequality Systems and Their Influence in Convex Optimization\nby Marco Antonio López Cerdá (University of Alicante) as part of On
e World Optimization seminar\n\n\nAbstract\nIn this seminar\, different op
timality conditions are presented for the convex optimization problem wit
h an arbitrary number of constraints. One possible approach is to replace
the set of constraints by a single constraint involving the supremum fun
ction\, and to appeal to different characterizations of its subdifferenti
al. With a view to this goal\, we extend to infinite convex systems two c
onstraint qualifications that are crucial in semi-infinite linear program
ming. The first one\, called the Farkas-Minkowski property\, is global in
nature\, while the other one is a local property\, which is known as the
local Farkas-Minkowski property. Four different KKT-type optimality condi
tions are then deduced\, either exact or asymptotic\, under progressively
weaker constraint qualifications.\n\nThe address and password of the zoo
m room of the seminar are sent by e-mail on the mailing list of the seminar
one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/47/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mikhail Solodov (IMPA Rio de Janeiro)
DTSTART;VALUE=DATE-TIME:20210524T133000Z
DTEND;VALUE=DATE-TIME:20210524T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/48
DESCRIPTION:Title: Re
gularized Smoothing for Solution Mappings of Convex Problems\, with Applic
ations to Two-Stage Stochastic Programming and Some Hierarchical Problems\nby Mikhail Solodov (IMPA Rio de Janeiro) as part of One World Optimiz
ation seminar\n\n\nAbstract\nMany modern optimization problems involve in
the objective function solution mappings or\noptimal-value functions of ot
her optimization problems.\nIn many cases\, those solution mappings a
nd optimal-value functions are nonsmooth\,\nand the optimal-value function
is also possibly nonconvex (even if the defining data\nis smooth and conv
ex).\nMoreover\, stemming from solving optimization problems\, those solut
ion mappings and\nvalue-functions are usually not known explicitly\, via a
ny closed formulas. Hence\,\nthere is no formula to differentiate (even in
the sense of generalized derivatives).\nThis presents an obvious challeng
e for solving the "upper" optimization problem\,\nas derivatives therein c
annot be computed.\n\nWe present an approach to regularize and approximate
solution mappings of fully\nparametrized convex optimization problems tha
t combines interior penalty (log-barrier)\nwith Tikhonov regularization. B
ecause the regularized solution mappings are single-valued\nand smooth und
er reasonable conditions\, they can also be used to build a computationall
y\npractical smoothing for the associated optimal-value function.\n\nOne m
otivating application of interest is two-stage (possibly nonconvex) stocha
stic\nprogramming. In addition to theoretical properties\, numerical exper
iments are presented\,\ncomparing the approach with the bundle method for
nonsmooth optimization.\n\nAnother application is a certain class of hiera
rchical decision problems\nthat can be viewed as single-leader multi-follo
wer games.\nThe objective function of the leader involves the decisions of
the followers (agents)\,\nwhich are taken independently by solving their
own convex optimization problems.\nWe show how our approach is applicable
to derive both agent-wise and scenario-wise\ndecomposition algorithms for
this class of problems.\nNumerical experiments and some comparisons with th
e complementarity solver PATH\nare shown for the two-stage stochastic Walr
asian equilibrium problem.\n\nThe address and password of the zoom room of
 the seminar are sent by e-mail on the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/48/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Heinz H. Bauschke (University of British Columbia Okanagan)
DTSTART;VALUE=DATE-TIME:20210628T133000Z
DTEND;VALUE=DATE-TIME:20210628T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/49
DESCRIPTION:Title: Co
mpositions of Projection Mappings: Fixed Point Sets and Difference Vectors
\nby Heinz H. Bauschke (University of British Columbia Okanagan) as pa
rt of One World Optimization seminar\n\n\nAbstract\nProjection operators a
nd associated projection algorithms are fundamental building blocks in fix
ed point theory and optimization.\nIn this talk\, I will survey recent res
ults on the displacement mapping of the right-shift operator and sketch a
new application deepening our understanding of the geometry of the fixed p
oint set of the composition of projection operators in Hilbert space.\nBas
ed on joint works with Salha Alwadani\, Julian Revalski\, and Shawn Wang.\
n\nThe address and password of the zoom room of the seminar are sent by e-
mail on the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/49/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael Hintermüller (Weierstrass Institute / Humboldt University
of Berlin)
DTSTART;VALUE=DATE-TIME:20210510T133000Z
DTEND;VALUE=DATE-TIME:20210510T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/50
DESCRIPTION:Title: Op
timization with Learning-Informed Differential Equation Constraints and It
s Applications\nby Michael Hintermüller (Weierstrass Institute / Humb
oldt University of Berlin) as part of One World Optimization seminar\n\n\n
Abstract\nInspired by applications in optimal control of semilinear ellipt
ic partial differential equations and physics-integrated imaging\, differe
ntial equation constrained optimization problems with constituents that ar
e only accessible through data-driven techniques are studied. A particular
focus is on the analysis and on numerical methods for problems with machi
ne-learned components. For a rather general context\, an error analysis is
provided\, and particular properties resulting from artificial neural net
work based approximations are addressed. Moreover\, for each of the two in
spiring applications analytical details are presented and numerical result
s are provided.\n\nThe address and password of the zoom room of the semina
r are sent by e-mail on the mailing list of the seminar one day before each
talk\n
LOCATION:https://researchseminars.org/talk/OWOS/50/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Constantin Zălinescu (Octav Mayer Institute of Mathematics Iași)
DTSTART;VALUE=DATE-TIME:20210607T133000Z
DTEND;VALUE=DATE-TIME:20210607T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/51
DESCRIPTION:Title: On
the Role of Interiority Notions in Convex Analysis and Optimization\n
by Constantin Zălinescu (Octav Mayer Institute of Mathematics Iași) as p
art of One World Optimization seminar\n\n\nAbstract\nIt is well known that
in finite dimensions\, in order to get the formulae for the subdifferenti
als and conjugates of\nfunctions obtained by operations which preserve con
vexity or for getting strong duality results\, the sufficient conditions a
re expressed by using the relative interiors of the domains of the involve
d functions.\n\nIn the infinite-dimensional case\, several interiority n
otions are used: the algebraic (relative) interior\, the quasi (relative)\
ninterior and mixtures of these. It is the aim of our talk to discuss the
advantages and limitations of these notions.\n\nThe address and password o
f the zoom room of the seminar are sent by e-mail on the mailing list of th
e seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/51/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Darinka Dentcheva (Stevens Institute of Technology)
DTSTART;VALUE=DATE-TIME:20210614T133000Z
DTEND;VALUE=DATE-TIME:20210614T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/52
DESCRIPTION:Title: Su
bregular Recourse in Multistage Stochastic Optimization\nby Darinka De
ntcheva (Stevens Institute of Technology) as part of One World Optimizatio
n seminar\n\n\nAbstract\nWe discuss nonlinear multistage stochastic optimi
zation problems in the spaces of integrable functions.\nThe problems may i
nclude nonlinear dynamics and general objective functionals with dynamic r
isk measures as a particular case.\nWe present an analysis of the causal operators describing the dynamics of the system and the Clarke subdifferential for a\npenalty function involving such operators. We introduce the concept of subregular recourse in nonlinear multistage stochastic optimization\
n and establish subregularity of the resulting systems in two formulations
:\nwith built-in nonanticipativity and with explicit nonanticipativity con
straints.\nOptimality conditions for both formulations and their relations
will be presented.\nThis is a joint work with Andrzej Ruszczynski\, Rutge
rs University\, New Jersey.\n\nThe address and password of the zoom room o
f the seminar are sent by e-mail on the mailing list of the seminar one day
before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/52/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Thomas Pock (Graz University of Technology)
DTSTART;VALUE=DATE-TIME:20210621T133000Z
DTEND;VALUE=DATE-TIME:20210621T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/53
DESCRIPTION:Title: Le
arning with Markov Random Field Models for Computer Vision\nby Thomas
Pock (Graz University of Technology) as part of One World Optimization sem
inar\n\n\nAbstract\nIn this talk I will show how learning techniques can b
e used to significantly improve the quality of discrete Markov Random Fiel
d (MRF) models. I will start by discussing fast algorithms that combine dy
namic programming with continuous optimization for solving MRF models. I t
hen show how their potentials can be learned from data to achieve state-of
-the-art performance for computer vision tasks such as stereo\, optical fl
ow and image segmentation.\n\nThe address and password of the zoom room of
 the seminar are sent by e-mail on the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/53/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniel Kuhn (EPFL)
DTSTART;VALUE=DATE-TIME:20210531T133000Z
DTEND;VALUE=DATE-TIME:20210531T143000Z
DTSTAMP;VALUE=DATE-TIME:20210612T232352Z
UID:OWOS/54
DESCRIPTION:Title: A
General Framework for Optimal Data-Driven Optimization\nby Daniel Kuhn
(EPFL) as part of One World Optimization seminar\n\n\nAbstract\nWe propos
e a statistically optimal approach to construct data-driven decisions for
stochastic optimization problems. Fundamentally\, a data-driven decision i
s simply a function that maps the available training data to a feasible ac
tion. It can always be expressed as the minimizer of a surrogate optimizat
ion model constructed from the data. The quality of a data-driven decision
is measured by its out-of-sample risk. An additional quality measure is i
ts out-of-sample disappointment\, which we define as the probability that
the out-of-sample risk exceeds the optimal value of the surrogate optimiza
tion model. The crux of data-driven optimization is that the data-generati
ng probability measure is unknown. An ideal data-driven decision should th
erefore minimize the out-of-sample risk simultaneously with respect to eve
ry conceivable probability measure (and thus in particular with respect to
the unknown true measure). Unfortunately\, such ideal data-driven decisio
ns are generally unavailable. This prompts us to seek data-driven decision
s that minimize the out-of-sample risk subject to an upper bound on the ou
t-of-sample disappointment - again simultaneously with respect to every co
nceivable probability measure. We prove that such Pareto-dominant data-dri
ven decisions exist under conditions that allow for interesting applicatio
ns: the unknown data-generating probability measure must belong to a param
etric ambiguity set\, and the corresponding parameters must admit a suffic
ient statistic that satisfies a large deviation principle. If these condit
ions hold\, we can further prove that the surrogate optimization model gen
erating the optimal data-driven decision must be a distributionally robust
optimization problem constructed from the sufficient statistic and the ra
te function of its large deviation principle. This shows that the optimal
method for mapping data to decisions is\, in a rigorous statistical sense\
, to solve a distributionally robust optimization model. Maybe surprisingl
y\, this result holds irrespective of whether the original stochastic opti
mization problem is convex or not and holds even when the training data is
non-i.i.d. As a byproduct\, our analysis reveals how the structural prope
rties of the data-generating stochastic process impact the shape of the am
biguity set underlying the optimal distributionally robust optimization mo
del.\n\nThis is joint work with Tobias Sutter and Bart Van Parys.\n\nThe a
ddress and password of the zoom room of the seminar are sent by e-mail on
 the mailing list of the seminar one day before each talk\n
LOCATION:https://researchseminars.org/talk/OWOS/54/
END:VEVENT
END:VCALENDAR