BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Tiến-Sơn Phạm (University of Dalat)
DTSTART;VALUE=DATE-TIME:20200603T070000Z
DTEND;VALUE=DATE-TIME:20200603T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/1
DESCRIPTION:Title: Openness\, Hölder metric regularity and Hölder continuity properties o
f semialgebraic set-valued maps\nby Tiến-Sơn Phạm (University of
Dalat) as part of Variational Analysis and Optimisation Webinar\n\n\nAbstr
act\nGiven a semialgebraic set-valued map with closed graph\, we show that
it is Hölder metrically subregular and that the following conditions are
equivalent:\n\n(i) the map is an open map from its domain into its range
and its range is locally closed\;\n\n(ii) the map is Hölder metrically
regular\;\n\n(iii) the inverse map is pseudo-Hölder continuous\;\n\n(iv)
the inverse map is lower pseudo-Hölder continuous.\n\nAn application\, v
ia Robinson’s normal map formulation\, leads to the following result in
the context of semialgebraic variational inequalities: if the solution map
(as a map of the parameter vector) is lower semicontinuous then the solut
ion map is finite and pseudo-Hölder continuous. In particular\, we obtain
a negative answer to a question mentioned in the paper of Dontchev and Roc
kafellar [Characterizations of strong regularity for variational inequalit
ies over polyhedral convex sets. SIAM J. Optim.\, 6(4):1087–1105\, 1996]
. As a byproduct\, we show that for a (not necessarily semialgebraic) cont
inuous single-valued map\, the openness and the non-extremality are equiva
lent. This fact improves the main result of Pühl [Convexity and openness
with linear rate. J. Math. Anal. Appl.\, 227:382–395\, 1998]\, which req
uires the convexity of the map in question.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michel Théra (University of Limoges)
DTSTART;VALUE=DATE-TIME:20200617T070000Z
DTEND;VALUE=DATE-TIME:20200617T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/2
DESCRIPTION:Title: Old and new results on equilibrium and quasi-equilibrium problems\nb
y Michel Théra (University of Limoges) as part of Variational Analysis an
d Optimisation Webinar\n\n\nAbstract\nIn this talk I will briefly survey s
ome old results going back to Ky Fan and to Brezis-Nirenberg and St
ampacchia. Then I will give some new results related to the existence of s
olutions to equilibrium and quasi-equilibrium problems without any convex
ity assumption. Coverage includes some equivalences to the Ekeland variati
onal principle for bifunctions and basic facts about transfer lower contin
uity. An application is given to systems of quasi-equilibrium problems.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marco A. López-Cerdá (University of Alicante)
DTSTART;VALUE=DATE-TIME:20200624T070000Z
DTEND;VALUE=DATE-TIME:20200624T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/3
DESCRIPTION:Title: Optimality conditions in convex semi-infinite optimization. An approach
based on the subdifferential of the supremum function\nby Marco A. Ló
pez-Cerdá (University of Alicante) as part of Variational Analysis and Op
timisation Webinar\n\n\nAbstract\nWe present a survey on optimality condit
ions (of Fritz-John and KKT-type) for semi-infinite convex optimization pr
oblems. The methodology is based on the use of the subdifferential of the
supremum of the infinite family of constraint functions. Our approach aims
to establish weak constraint qualifications and\, in the last step\, to drop
the usual continuity/closedness assumptions which are standard in
the literature. The material in this survey is extracted from the follow
ing papers:\n\nR. Correa\, A. Hantoute\, M. A. López\, Weaker conditions
for subdifferential calculus of convex functions. J. Funct. Anal. 271 (201
6)\, 1177-1212.\n\nR. Correa\, A. Hantoute\, M. A. López\, Moreau-Rockafe
llar type formulas for the subdifferential of the supremum function. SIAM
J. Optim. 29 (2019)\, 1106-1130.\n\nR. Correa\, A. Hantoute\, M. A. López
\, Valadier-like formulas for the supremum function II: the compactly inde
xed case. J. Convex Anal. 26 (2019)\, 299-324.\n\nR. Correa\, A. Hantoute\
, M. A. López\, Subdifferential of the supremum via compactification of t
he index set. To appear in Vietnam J. Math. (2020).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hoa Bui (Curtin University)
DTSTART;VALUE=DATE-TIME:20200708T070000Z
DTEND;VALUE=DATE-TIME:20200708T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/4
DESCRIPTION:Title: Zero Duality Gap Conditions via Abstract Convexity\nby Hoa Bui (Curt
in University) as part of Variational Analysis and Optimisation Webinar\n\
n\nAbstract\nUsing tools provided by the theory of abstract convexity\, we
extend conditions for zero duality gap to the context of nonconvex and no
nsmooth optimization. In place of the classical setting\, an abstract con
vex function is the upper envelope of a subset of a family of abstract aff
ine functions (being conventional vertical translations of the abstract li
near functions). We establish new characterizations of the zero duality ga
p under no assumptions on the topology on the space of abstract linear fun
ctions. Endowing the latter space with the topology of pointwise convergen
ce\, we extend several fundamental facts of the conventional convex analys
is. In particular\, we prove that the zero duality gap property can be sta
ted in terms of an inclusion involving ε-subdifferentials\, which are sho
wn to possess a sum rule. These conditions are new even in conventional co
nvex cases. The Banach-Alaoglu-Bourbaki theorem is extended to the space o
f abstract linear functions. The latter result extends a fact recently est
ablished by Borwein\, Burachik and Yao in the conventional convex case.\n\
nThis talk is based on a joint work with Regina Burachik\, Alex Kruger and
David Yost.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:James Saunderson (Monash University)
DTSTART;VALUE=DATE-TIME:20200715T070000Z
DTEND;VALUE=DATE-TIME:20200715T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/5
DESCRIPTION:Title: Lifting for simplicity: concise descriptions of convex sets\nby Jame
s Saunderson (Monash University) as part of Variational Analysis and Optim
isation Webinar\n\n\nAbstract\nThis talk will give a selective tour throug
h the theory and applications of lifts of convex sets. A lift of a convex
set is a higher-dimensional convex set that projects onto the original set
. Many interesting convex sets have lifts that are dramatically simpler to
describe than the original set. Finding such simple lifts has significant
algorithmic implications\, particularly for associated optimization probl
ems. We will consider both the classical case of polyhedral lifts\, which
are described by linear inequalities\, as well as spectrahedral lifts\, wh
ich are defined by linear matrix inequalities. The tour will include discu
ssion of ways to construct lifts\, ways to find obstructions to the existe
nce of lifts\, and a number of interesting examples from a variety of math
ematical contexts. (Based on joint work with H. Fawzi\, J. Gouveia\, P. Pa
rrilo\, and R. Thomas).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Akiko Takeda (University of Tokyo)
DTSTART;VALUE=DATE-TIME:20200729T070000Z
DTEND;VALUE=DATE-TIME:20200729T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/6
DESCRIPTION:Title: Deterministic and Stochastic Gradient Methods for Non-Smooth Non-Convex
Regularized Optimization\nby Akiko Takeda (University of Tokyo) as pa
rt of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nOur wor
k focuses on deterministic/stochastic gradient methods for optimizing a sm
ooth non-convex loss function with a non-smooth non-convex regularizer. Re
search on stochastic gradient methods is quite limited\, and until recentl
y no non-asymptotic convergence results had been reported. After describing
a deterministic approach\, we present simple stochastic gradient algorithm
s\, for finite-sum and general stochastic optimization problems\, which ha
ve superior convergence complexities compared to the current state-of-the-
art. We also compare our algorithms’ performance in practice for empiric
al risk minimization.\n\nThis is based on joint works with Tianxiang Liu\
, Ting Kei Pong and Michael R. Metel.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Evgeni Nurminski (Far Eastern Federal University)
DTSTART;VALUE=DATE-TIME:20200805T070000Z
DTEND;VALUE=DATE-TIME:20200805T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/7
DESCRIPTION:Title: Practical Projection with Applications\nby Evgeni Nurminski (Far Eas
tern Federal University) as part of Variational Analysis and Optimisation
Webinar\n\n\nAbstract\nProjection of a point on a given set is a very comm
on computational operation in an endless number of algorithms and applicat
ions. However\, with the exception of the simplest sets\, it is in itself a
nontrivial operation\, often complicated by large dimension\, computational
degeneracy\, nonuniqueness (even for orthogonal projection on convex sets
in certain situations)\, and so on. This talk aims to present some practi
cal solutions\, i.e. finite algorithms\, for projection onto polyhedral sets\,
among them simplices\, polytopes\, polyhedra\, and finitely generated cones\,
with some discussion of “nonlinearities”\, decomposition and parallel
computations. We also consider the application of the projection operation
in linear optimization and an epi-projection algorithm for convex
optimization.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Xiaoqi Yang (The Hong Kong Polytechnic University)
DTSTART;VALUE=DATE-TIME:20200812T070000Z
DTEND;VALUE=DATE-TIME:20200812T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/8
DESCRIPTION:Title: On error bound moduli for locally Lipschitz and regular functions\nb
y Xiaoqi Yang (The Hong Kong Polytechnic University) as part of Variationa
l Analysis and Optimisation Webinar\n\n\nAbstract\nWe first introduce for
a closed and convex set two classes of subsets: the near and far ends rela
tive to a point\, and give some full characterizations for these end sets
by virtue of the face theory of closed and convex sets. We provide some co
nnections between closedness of the far (near) end and the relative contin
uity of the gauge (cogauge) for closed and convex sets. We illustrate that
the distance from 0 to the outer limiting subdifferential of the support
function of the subdifferential set\, which is essentially the distance fr
om 0 to the end set of the subdifferential set\, is an upper estimate of t
he local error bound modulus. This upper estimate becomes tight for a conv
ex function under some regularity conditions. We show that the distance fr
om 0 to the outer limiting subdifferential set of a lower C^1 function is
equal to the local error bound modulus.\n\n\nReferences:\nLi\, M.H.\, Meng
K.W. and Yang X.Q.\, On far and near ends of closed and convex sets. Jour
nal of Convex Analysis. 27 (2020) 407–421.\nLi\, M.H.\, Meng K.W. and Ya
ng X.Q.\, On error bound moduli for locally Lipschitz and regular function
s\, Math. Program. 171 (2018) 463–487.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marián Fabian (Czech Academy of Sciences)
DTSTART;VALUE=DATE-TIME:20200701T070000Z
DTEND;VALUE=DATE-TIME:20200701T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/9
DESCRIPTION:Title: Can Pourciau’s open mapping theorem be derived from Clarke’s inverse
mapping theorem?\nby Marián Fabian (Czech Academy of Sciences) as pa
rt of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nWe disc
uss the possibility of deriving Pourciau’s open mapping theorem from Cla
rke’s inverse mapping theorem. These theorems work with the Clarke gener
alized Jacobian. In our journey\, we will face several interesting phenome
na and pitfalls in the world of (just) 2 by 3 matrices.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Oliver Stein (Karlsruhe Institute of Technology)
DTSTART;VALUE=DATE-TIME:20200722T070000Z
DTEND;VALUE=DATE-TIME:20200722T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/10
DESCRIPTION:Title: A general branch-and-bound framework for global multiobjective optimiza
tion\nby Oliver Stein (Karlsruhe Institute of Technology) as part of V
ariational Analysis and Optimisation Webinar\n\n\nAbstract\nWe develop a g
eneral framework for branch-and-bound methods in multiobjective optimizati
on. Our focus is on natural generalizations of notions and techniques from
the single objective case. In particular\, after the notions of upper and
lower bounds on the globally optimal value from the single objective case
have been transferred to upper and lower bounding sets on the set of nond
ominated points for multiobjective programs\, we discuss several possibili
ties for discarding tests. They compare local upper bounds of the provisio
nal nondominated sets with relaxations of partial upper image sets\, where
the latter can stem from ideal point estimates\, from convex relaxations\
\, or from relaxations by a reformulation-linearization technique.\n\n
The discussion of approximation properties of the provisional nondominated
set leads to the suggestion for a natural selection rule along with a nat
ural termination criterion. Finally we discuss some issues which do not oc
cur in the single objective case and which impede some desirable convergen
ce properties\, thus also motivating a natural generalization of the conve
rgence concept.\n\nThis is joint work with Gabriele Eichfelder\, Peter Kir
st\, and Laura Meng.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christiane Tammer (Martin Luther University Halle-Wittenberg)
DTSTART;VALUE=DATE-TIME:20200909T070000Z
DTEND;VALUE=DATE-TIME:20200909T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/11
DESCRIPTION:Title: Subdifferentials and Lipschitz properties of translation invariant func
tionals and applications\nby Christiane Tammer (Martin Luther Universi
ty Halle-Wittenberg) as part of Variational Analysis and Optimisation Webi
nar\n\n\nAbstract\nIn the talk\, we are dealing with translation invariant
functionals and their application for deriving necessary conditions for m
inimal solutions of constrained and unconstrained optimization problems wi
th respect to general domination sets.\n\nTranslation invariant functional
s are a natural and powerful tool for the separation of not necessarily co
nvex sets and scalarization. There are many applications of translation in
variant functionals in nonlinear functional analysis\, vector optimization
\, set optimization\, optimization under uncertainty\, mathematical financ
e as well as consumer and production theory.\n\nThe primary objective of t
his talk is to establish formulas for basic and singular subdifferentials
of translation invariant functionals and to study important properties suc
h as monotonicity\, the PSNC property\, the Lipschitz behavior\, etc. of t
hese nonlinear functionals without assuming that the shifted set involved
in the definition of the functional is convex. The second objective is to
propose a new way to scalarize a set-valued optimization problem. It allow
s us to study necessary conditions for minimal solutions in a very broad s
etting in which the domination set is not necessarily convex or solid or c
onical. The third objective is to apply our results to vector-valued appro
ximation problems.\n\nThis is a joint work with T.Q. Bao (Northern Michiga
n University).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gerd Wachsmuth (BTU)
DTSTART;VALUE=DATE-TIME:20200902T070000Z
DTEND;VALUE=DATE-TIME:20200902T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/12
DESCRIPTION:Title: New Constraint Qualifications for Optimization Problems in Banach Space
s based on Asymptotic KKT Conditions\nby Gerd Wachsmuth (BTU) as part
of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nOptimizati
on theory in Banach spaces suffers from the lack of available constraint q
ualifications. Not only do very few constraint qualifications exist\, but
those that do are often violated even in simple applications. This is in
sharp contrast to finite-dimensional nonlinea
r programs\, where a large number of constraint qualifications is known. S
ince these constraint qualifications are usually defined using the set of
active inequality constraints\, it is difficult to extend them to the infi
nite-dimensional setting. One exception is a recently introduced sequentia
l constraint qualification based on asymptotic KKT conditions. This paper
shows that this so-called asymptotic KKT regularity allows suitable extens
ions to the Banach space setting in order to obtain new constraint qualifi
cations. The relation of these new constraint qualifications to existing o
nes is discussed in detail. Their usefulness is also shown by several exam
ples as well as an algorithmic application to the class of augmented Lagra
ngian methods.\n\nThis is a joint work with Christian Kanzow (Würzburg) a
nd Patrick Mehlitz (Cottbus).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Regina Burachik (UniSA)
DTSTART;VALUE=DATE-TIME:20200923T070000Z
DTEND;VALUE=DATE-TIME:20200923T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/13
DESCRIPTION:Title: A Primal–Dual Penalty Method via Rounded Weighted-$L_1$ Lagrangian Du
ality\nby Regina Burachik (UniSA) as part of Variational Analysis and
Optimisation Webinar\n\n\nAbstract\nWe propose a new duality scheme based
on a sequence of smooth minorants of the weighted-$l_1$ penalty function\,
interpreted as a parametrized sequence of augmented Lagrangians\, to sol
ve nonconvex constrained optimization problems. For the induced sequence o
f dual problems\, we establish strong asymptotic duality properties. Namel
y\, we show that (i) the sequence of dual problems is convex and (ii) the
dual values monotonically increase to the optimal primal value. We use th
ese properties to devise a subgradient based primal–dual method\, and sh
ow that the generated primal sequence accumulates at a solution of the ori
ginal problem. We illustrate the performance of the new method with three
different types of test problems: A polynomial nonconvex problem\, large-s
cale instances of the celebrated kissing number problem\, and the Markov
–Dubins problem. Our numerical experiments demonstrate that\, when compa
red with the traditional implementation of a well-known smooth solver\, ou
r new method (using the same well-known solver in its subproblem) can find
better quality solutions\, i.e.\, “deeper” local minima\, or solution
s closer to the global minimum. Moreover\, our method seems to be more tim
e efficient\, especially when the problem has a large number of constraint
s.\n\nThis is a joint work with C. Y. Kaya (UniSA) and C. J. Price (Univer
sity of Canterbury\, Christchurch\, New Zealand).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christopher Price (University of Canterbury)
DTSTART;VALUE=DATE-TIME:20200916T070000Z
DTEND;VALUE=DATE-TIME:20200916T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/14
DESCRIPTION:Title: A direct search method for constrained optimization via the rounded $l_
1$ penalty function.\nby Christopher Price (University of Canterbury)
as part of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nTh
is talk looks at the constrained optimization problem when the objective
and constraints are Lipschitz continuous black box functions. The approach
uses a sequence of smoothed and offset $\\ell_1$ penalty functions. The
method generates an approximate minimizer to each penalty function\, and
then adjusts the offsets and other parameters. The smoothing is steadily
reduced\, ultimately revealing the exact $\\ell_1$ penalty function. The
method preferentially uses a discrete quasi-Newton step\, backed up by a
global direction search. Theoretical convergence results are given for the
smooth and non-smooth cases subject to relevant conditions. Numerical
results are presented on a variety of problems with non-smooth objective
or constraint functions. These results show the method is effective in
practice.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yalçın Kaya (UniSA)
DTSTART;VALUE=DATE-TIME:20200930T070000Z
DTEND;VALUE=DATE-TIME:20200930T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/15
DESCRIPTION:Title: Constraint Splitting and Projection Methods for Optimal Control\nby
Yalçın Kaya (UniSA) as part of Variational Analysis and Optimisation We
binar\n\n\nAbstract\nWe consider a class of optimal control problems with
constrained control variable. We split the ODE constraint and the control
constraint of the problem so as to obtain two optimal control subproblems
for each of which solutions can be written simply. Employing these simple
r solutions as projections\, we find numerical solutions to the original p
roblem by applying four different projection-type methods: (i) Dykstra’s
algorithm\, (ii) the Douglas–Rachford (DR) method\, (iii) the Aragón A
rtacho–Campoy (AAC) algorithm and (iv) the fast iterative shrinkage-thre
sholding algorithm (FISTA). The problem we study is posed in infinite-dim
ensional Hilbert spaces. The behaviour of the DR and AAC algorithms is explor
ed via numerical experiments with respect to their parameters. An error an
alysis is also carried out numerically for a particular instance of the pr
oblem for each of the algorithms. This is joint work with Heinz Bauschke
and Regina Burachik.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hieu Thao Nguyen (TU Delft)
DTSTART;VALUE=DATE-TIME:20200819T070000Z
DTEND;VALUE=DATE-TIME:20200819T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/16
DESCRIPTION:Title: Projection algorithms for phase retrieval with high numerical aperture
\nby Hieu Thao Nguyen (TU Delft) as part of Variational Analysis and Op
timisation Webinar\n\n\nAbstract\nWe develop the mathematical framework in
which the class of projection algorithms can be applied to high numerical
aperture (NA) phase retrieval. Within this framework we first analyze the
basic steps of solving this problem by projection algorithms and establis
h the closed forms of all the relevant prox-operators. We then study the g
eometry of the high-NA phase retrieval problem and the obtained results ar
e subsequently used to establish convergence criteria of projection algori
thms. Making use of the vectorial point-spread-function (PSF) is\, on the
one hand\, the key difference between this work and the literature of phas
e retrieval mathematics which mostly deals with the scalar PSF. The result
s of this paper\, on the other hand\, can be viewed as extensions of those
concerning projection methods for low-NA phase retrieval. Importantly\, t
he improved performance of projection methods over the other classes of ph
ase retrieval algorithms in the low-NA setting now also becomes applicable
to the high-NA case. This is demonstrated by the accompanying numerical r
esults which show that all available solution approaches for high-NA phase
retrieval are outperformed by projection methods.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Reinier Diaz Millan (Deakin University)
DTSTART;VALUE=DATE-TIME:20201007T060000Z
DTEND;VALUE=DATE-TIME:20201007T070000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/17
DESCRIPTION:Title: An algorithm for pseudo-monotone operators with application to rational
approximation\nby Reinier Diaz Millan (Deakin University) as part of
Variational Analysis and Optimisation Webinar\n\n\nAbstract\nThe motivatio
n of this paper is the development of an optimisation method for solving o
ptimisation problems appearing in Chebyshev rational and generalised ratio
nal approximation problems\, where the approximations are constructed as r
atios of linear forms (linear combination of basis functions). The coeffic
ients of the linear forms are subject to optimisation and the basis functi
ons are continuous functions. It is known that the objective functions in
generalised rational approximation problems are quasi-convex. In this paper
we also prove a stronger result: the objective functions are pseudo-convex.
We then develop numerical methods that are efficient for a wide rang
e of pseudo-convex functions and test them on generalised rational approxi
mation problems.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jein-Shan Chen (NTNU)
DTSTART;VALUE=DATE-TIME:20200826T070000Z
DTEND;VALUE=DATE-TIME:20200826T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/18
DESCRIPTION:Title: Two approaches for absolute value equation by using smoothing functions
\nby Jein-Shan Chen (NTNU) as part of Variational Analysis and Optimis
ation Webinar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/VAWebinar/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Björn Rüffer (University of Newcastle)
DTSTART;VALUE=DATE-TIME:20201014T060000Z
DTEND;VALUE=DATE-TIME:20201014T070000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/19
DESCRIPTION:Title: A Lyapunov perspective to projection algorithms\nby Björn Rüffer
(University of Newcastle) as part of Variational Analysis and Optimisation
Webinar\n\n\nAbstract\nThe operator theoretic point of view has been very
successful in the study of iterative splitting methods under a unified fr
amework. These algorithms include the Method of Alternating Projections as
well as the Douglas-Rachford Algorithm\, which is dual to the Alternating
Direction Method of Multipliers\, and they allow nice geometric interpret
ations. While convergence results for these algorithms have been known for
decades when problems are convex\, for non-convex problems progress on co
nvergence results has significantly increased once arguments based on Lyap
unov functions were used. In this talk we give an overview of the underlyi
ng techniques in Lyapunov's direct method and look at convergence of itera
tive projection methods through this lens.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wilfredo Sosa (UCB)
DTSTART;VALUE=DATE-TIME:20201021T060000Z
DTEND;VALUE=DATE-TIME:20201021T070000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/20
DESCRIPTION:Title: On diametrically maximal sets\, maximal premonotone maps and premonotone
bifunctions\nby Wilfredo Sosa (UCB) as part of Variational Analysis an
d Optimisation Webinar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/VAWebinar/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Radek Cibulka (University of West Bohemia)
DTSTART;VALUE=DATE-TIME:20201028T060000Z
DTEND;VALUE=DATE-TIME:20201028T070000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/21
DESCRIPTION:Title: Continuous selections for inverse mappings in Banach spaces\nby Rad
ek Cibulka (University of West Bohemia) as part of Variational Analysis an
d Optimisation Webinar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/VAWebinar/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ernest Ryu (Seoul National University)
DTSTART;VALUE=DATE-TIME:20201125T060000Z
DTEND;VALUE=DATE-TIME:20201125T070000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/22
DESCRIPTION:Title: Scaled Relative Graph: Nonexpansive operators via 2D Euclidean Geometry
\nby Ernest Ryu (Seoul National University) as part of Variational Ana
lysis and Optimisation Webinar\n\n\nAbstract\nMany iterative methods in ap
plied mathematics can be thought of as fixed-point iterations\, and such a
lgorithms are usually analyzed analytically\, with inequalities. In this w
ork\, we present a geometric approach to analyzing contractive and nonexpa
nsive fixed point iterations with a new tool called the scaled relative gr
aph (SRG). The SRG provides a rigorous correspondence between nonlinear op
erators and subsets of the 2D plane. Under this framework\, a geometric ar
gument in the 2D plane becomes a rigorous proof of contractiveness of the
corresponding operator.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Vinesha Peiris (Swinburne University of Technology)
DTSTART;VALUE=DATE-TIME:20201111T060000Z
DTEND;VALUE=DATE-TIME:20201111T070000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/23
DESCRIPTION:Title: The extension of linear inequality method for generalised rational Cheb
yshev approximation\nby Vinesha Peiris (Swinburne University of Techno
logy) as part of Variational Analysis and Optimisation Webinar\n\n\nAbstra
ct\nIn this talk we will demonstrate the correspondence between the linear
inequality method developed for rational Chebyshev approximation and the
bisection method used in quasiconvex optimisation. It naturally connects r
ational and generalised rational Chebyshev approximation problems with mod
ern developments in the area of quasiconvex functions. Moreover\, the line
ar inequality method can be extended to a broader class of Chebyshev appro
ximation problems\, where the corresponding objective functions remain qua
siconvex.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chayne Planiden (University of Wollongong)
DTSTART;VALUE=DATE-TIME:20201104T060000Z
DTEND;VALUE=DATE-TIME:20201104T070000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/24
DESCRIPTION:Title: New gradient and Hessian approximation methods for derivative-free opti
misation\nby Chayne Planiden (University of Wollongong) as part of Var
iational Analysis and Optimisation Webinar\n\n\nAbstract\nIn general\, der
ivative-free optimisation (DFO) uses approximations of first- and second-o
rder information in minimisation algorithms. DFO is found in direct-search
\, model-based\, trust-region and other mainstream optimisation techniques
and has gained popularity in recent years. This work discusses previous r
esults on some particular uses of DFO: the proximal bundle method and the
VU-algorithm\, and then presents improvements made this year on the gradie
nt and Hessian approximation techniques. These improvements can be inserte
d into any routine that requires such estimations.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aram Arutyunov and S.E. Zhukovskiy (Moscow State Uni/ICS RAS)
DTSTART;VALUE=DATE-TIME:20201118T060000Z
DTEND;VALUE=DATE-TIME:20201118T070000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/25
DESCRIPTION:Title: Local and Global Inverse and Implicit Function Theorems\nby Aram Ar
utyunov and S.E. Zhukovskiy (Moscow State Uni/ICS RAS) as part of Variatio
nal Analysis and Optimisation Webinar\n\n\nAbstract\nIn the talk\, we pres
ent a local inverse function theorem on a cone in a neighbourhood of an
abnormal point. We present a global inverse function theorem in the form of
a theorem on a trivial bundle\, guaranteeing that if a smooth mapping of
finite-dimensional spaces is uniformly nonsingular\, then it has a smooth
right inverse satisfying an a priori estimate. We also present a global implicit func
tion theorem guaranteeing the existence and continuity of a global implici
t function under the condition that the mappings in question are uniformly
nonsingular. The generalization of these results to the case of mappings
of Hilbert spaces and Banach spaces is discussed.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/25/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nam Ho-Nguyen (University of Sydney)
DTSTART;VALUE=DATE-TIME:20210210T000000Z
DTEND;VALUE=DATE-TIME:20210210T010000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/26
DESCRIPTION:Title: Coordinate Descent Without Coordinates: Tangent Subspace Descent on Rie
mannian Manifolds\nby Nam Ho-Nguyen (University of Sydney) as part of
Variational Analysis and Optimisation Webinar\n\n\nAbstract\nWe consider a
n extension of the coordinate descent algorithm to manifold domains\, and
provide convergence analyses for geodesically convex and non-convex smooth
objective functions. Our key insight is to draw an analogy between coordi
nate blocks in Euclidean space and tangent subspaces of a manifold. Hence\
, our method is called tangent subspace descent (TSD). The core principle
behind ensuring convergence of TSD is the appropriate choice of subspace a
t each iteration. To this end\, we propose two novel conditions: the gap e
nsuring and $C$-randomized norm conditions on deterministic and randomized
modes of subspace selection respectively. These ensure convergence for sm
ooth functions\, and are satisfied in practical contexts. We propose two s
ubspace selection rules of particular practical interest that satisfy thes
e conditions: a deterministic one for the manifold of square orthogonal ma
trices\, and a randomized one for the more general Stiefel manifold.\n(Thi
s is joint work with David Huckleberry Gutman\, Texas Tech University.)\n
LOCATION:https://researchseminars.org/talk/VAWebinar/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Javier Peña (Carnegie-Mellon University)
DTSTART;VALUE=DATE-TIME:20210303T000000Z
DTEND;VALUE=DATE-TIME:20210303T010000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/27
DESCRIPTION:Title: The condition number of a function relative to a set\nby Javier Pe
ña (Carnegie-Mellon University) as part of Variational Analysis and Optim
isation Webinar\n\n\nAbstract\nThe condition number of a differentiable co
nvex function\, namely the ratio of its smoothness to strong convexity con
stants\, is closely tied to fundamental properties of the function. In par
ticular\, the condition number of a quadratic convex function is the squar
e of the aspect ratio of a canonical ellipsoid associated to the function.
Furthermore\, the condition number of a function bounds the linear rate o
f convergence of the gradient descent algorithm for unconstrained convex m
inimization.\n\nWe propose a condition number of a differentiable convex f
unction relative to a reference set and distance function pair. This relat
ive condition number is defined as the ratio of a relative smoothness con
stant to a relative strong convexity constant. We show that the relative condition
number extends the main properties of the traditional condition number bot
h in terms of its geometric insight and in terms of its role in characteri
zing the linear convergence of first-order methods for constrained convex
minimization.\n\nThis is joint work with David H. Gutman at Texas Tech Uni
versity.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Russell Luke (University of Göttingen)
DTSTART;VALUE=DATE-TIME:20210407T070000Z
DTEND;VALUE=DATE-TIME:20210407T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/28
DESCRIPTION:Title: Inconsistent Stochastic Feasibility: the Case of Stochastic Tomography\nby Rus
sell Luke (University of Göttingen) as part of Variational Ana
lysis and Optimisation Webinar\n\n\nAbstract\nIn an X-FEL experiment\, hig
h-energy x-ray pulses are shot with high repetition rates on a \nstream of
identical single biomolecules and the scattered photons are recorded on a
\npixelized detector. These experiments provide a new and unique route to
\nmacromolecular structure determination at room temperature\, without th
e \nneed for crystallization\, and at low material usage. The main challe
nges in \nthese experiments are the extremely low signal-to-noise ratio du
e to the very \nlow expected photon count per scattering image (10-50) and
the unknown \norientation of the molecules in each scattering image.\n\nM
athematically\, this is a stochastic computed tomography problem where the
goal \nis to reconstruct a three-dimensional object from noisy two-dimens
ional images of \na nonlinear mapping whose orientation relative to the ob
ject is both random and \nunobservable. The idea is to develop a two-st
ep procedure for solving this problem. \nIn the first step\, we numerical
ly compute a probability distribution associated with \nthe observed patte
rns (taken together) as the stationary measure of a \nMarkov chain whose g
enerator is constructed from the individual observations. \nCorrelation in
the data and other a priori information are used to further constrain \nth
e problem and accelerate convergence to a stationary measure. With the sta
tionary \nmeasure in hand\, the second step involves solving a phase retri
eval problem \nfor the mean electron density relative to a fixed reference
orientation.\n\nThe focus of this talk is conceptual\, and involves re-en
visioning projection algorithms\nas Markov chains. We present some new rou
tes to "old" results\, and a \nfundamental new approach to under
standing and accounting for numerical computation\non conventional compute
rs.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/28/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Huynh Van Ngai (University of Quy Nhon)
DTSTART;VALUE=DATE-TIME:20210324T060000Z
DTEND;VALUE=DATE-TIME:20210324T070000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/29
DESCRIPTION:Title: Generalized Nesterov's accelerated proximal gradient algorithms with co
nvergence rate of order $o(1/k^2)$\nby Huynh Van Ngai (University of Q
uy Nhon) as part of Variational Analysis and Optimisation Webinar\n\n\nAbs
tract\nThe accelerated gradient method initiated by Nesterov is now recogn
ized to be one of the most powerful tools for solving smooth convex optimi
zation problems. This method improves significantly the convergence rate o
f function values from $O(1/k)$ of the standard gradient method down to $O
(1/k^2).$ In this paper\, we present two generalized variants of Nesterov'
s accelerated proximal gradient method for solving composite convex opti
mization problems in which the objective function is represented by the su
m of a smooth convex function and a nonsmooth convex part. We show that wi
th suitable ways to pick the sequences of parameters\, the convergence rat
e for the function values of this proposed method is actually of order $o
(1/k^2).$ In particular\, when the objective function is $p$-uniformly convex
for $p>2\,$ the convergence rate is of order $O\\left(\\ln k/k^{2p/(p-2)}
\\right)\,$ and the convergence is linear if the objective function is str
ongly convex. As a by-product\, we derive a forward-backward algorithm generali
zing the one by Attouch-Peypouquet [SIAM J. Optim.\, 26(3)\, 1824-1834\, (
2016)]\, which produces a convergent sequence with a convergence rate of
the function values of order $o(1/k^2).$\n
LOCATION:https://researchseminars.org/talk/VAWebinar/29/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yboon Garcia Ramos (Universidad del Pacífico)
DTSTART;VALUE=DATE-TIME:20210331T000000Z
DTEND;VALUE=DATE-TIME:20210331T010000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/30
DESCRIPTION:Title: Characterizing quasiconvexity of the pointwise infimum of a family of
arbitrary translations of quasiconvex functions\nby Yboon Garcia Ramos
(Universidad del Pacífico) as part of Variational Analysis and Optimisat
ion Webinar\n\n\nAbstract\nIn this talk we will present some results conce
rning the problem of preserving quasiconvexity when summing up quasiconve
x functions and we will relate it to the problem of preserving quasiconvex
ity when taking the infimum of a family of quasiconvex functions. To devel
op our study\, the notion of a quasiconvex family is introduced\, and we est
ablish various characterizations of this concept.\n\nJoint work with Fab
ián Flores\, Universidad de Concepción and Nicolas Hadjisavvas\, Univers
ity of the Aegean.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/30/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ewa Bednarczuk (Warsaw University of Technology and Systems Resear
ch Institute of the PAS)
DTSTART;VALUE=DATE-TIME:20210421T070000Z
DTEND;VALUE=DATE-TIME:20210421T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/31
DESCRIPTION:Title: On duality for nonconvex minimization problems within the framework of
abstract convexity\nby Ewa Bednarczuk (Warsaw University of Technolog
y and Systems Research Institute of the PAS) as part of Variational Analys
is and Optimisation Webinar\n\n\nAbstract\nBy applying the perturbation fu
nction approach\, we propose the Lagrangian and the conjugate duals for
minimization problems of the sum of two\, generally nonconvex\, functions
. The main tool is the abstract convexity theory\, called $\\Phi$-convex
ity\, and minimax theorems for $\\Phi$-convex functions. We provide condi
tions ensuring zero duality gap and introduce generalized Karush-Kuhn-Tuck
er conditions that characterize solutions to primal and dual problems. We
also discuss the relationship between the dual problems proposed in the prese
nt investigation and some conjugate-type duals existing in the literature.
The presentation is based on joint works with Monika Syga.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/31/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Roger Behling (Fundação Getúlio Vargas)
DTSTART;VALUE=DATE-TIME:20210414T010000Z
DTEND;VALUE=DATE-TIME:20210414T020000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/32
DESCRIPTION:Title: Circumcentering projection type methods\nby Roger Behling (Fundaç
ão Getúlio Vargas) as part of Variational Analysis and Optimisation Webi
nar\n\n\nAbstract\nEnforcing successive projections\, averaging the compos
ition of reflections and barycentering projections are established techniques
for solving convex feasibility problems. These schemes are called the meth
od of alternating projections (MAP)\, the Douglas-Rachford method (DRM) an
d the Cimmino method (CimM)\, respectively. Recently\, we have developed t
he circumcentered-reflection method (CRM)\, whose iterations employ genera
lized circumcenters that are able to accelerate the aforementioned classic
al approaches both theoretically and numerically. In this talk\, the main
results on CRM are presented and a glimpse on future work will be provided
as well.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/32/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alexander J. Zaslavski (The Technion - Israel Institute of Technol
ogy)
DTSTART;VALUE=DATE-TIME:20210217T060000Z
DTEND;VALUE=DATE-TIME:20210217T070000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/33
DESCRIPTION:Title: Subgradient Projection Algorithm with Computational Errors\nby Alex
ander J. Zaslavski (The Technion - Israel Institute of Technology) as part
of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nWe study
the subgradient projection algorithm for minimization of convex and nonsmo
oth\nfunctions\, under the presence of computational errors. We show that
our algorithms generate a good approximate solution\, if computational err
ors are bounded from above by a small positive constant.\nMoreover\, for a
known computational error\, we determine which approximate solution can
be obtained and how many iterates are needed to obtain it.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/33/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yura Malitsky (Linköping University)
DTSTART;VALUE=DATE-TIME:20210519T070000Z
DTEND;VALUE=DATE-TIME:20210519T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/34
DESCRIPTION:Title: Adaptive gradient descent without descent\nby Yura Malitsky (Linkö
ping University) as part of Variational Analysis and Optimisation Webinar\
n\n\nAbstract\nIn this talk I will present some recent results for the mos
t classical optimization method: gradient descent. We will show that a
simple zero-cost rule is sufficient to completely automate gradient descen
t. The method adapts to the local geometry\, with convergence guarantees d
epending only on the smoothness in a neighborhood of a solution. The prese
ntation is based on a joint work with K. Mishchenko\, see\nhttps://arxiv.o
rg/abs/1910.09529.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/34/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nguyen Duy Cuong (Federation University)
DTSTART;VALUE=DATE-TIME:20210224T060000Z
DTEND;VALUE=DATE-TIME:20210224T070000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/35
DESCRIPTION:Title: Necessary conditions for transversality properties\nby Nguyen Duy C
uong (Federation University) as part of Variational Analysis and Optimisat
ion Webinar\n\n\nAbstract\nTransversality properties of collections of set
s play an important role in optimization and variational analysis\, e.g.\,
as constraint qualifications\, qualification conditions in subdifferentia
l\, normal cone and coderivative calculus\, and convergence analysis of co
mputational algorithms. In this talk\, we present some new results on prim
al (geometric\, metric\, slope) and dual (subdifferential\, normal cone) n
ecessary (in some cases also sufficient) conditions for transversality pro
perties in both linear and nonlinear settings. Quantitative relations betw
een transversality properties and the corresponding regularity properties
of set-valued mappings are also discussed.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/35/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lyudmila Polyakova (Saint-Petersburg State University)
DTSTART;VALUE=DATE-TIME:20210505T070000Z
DTEND;VALUE=DATE-TIME:20210505T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/36
DESCRIPTION:Title: Smooth approximations of D.C. functions\nby Lyudmila Polyakova (Sai
nt-Petersburg State University) as part of Variational Analysis and Optimi
sation Webinar\n\n\nAbstract\nThe investigation of differences of convex fun
ctions is based on the basic facts and theorems of convex analysis\, sin
ce the class of convex functions is one of the most thoroughly investigated
among nonsmooth functions. For an arbitrary convex function\, a family of co
ntinuously differentiable approximations is constructed using the infimal
convolution operation. If the domain of the function under consideration i
s compact\, then such smooth convex approximations are uniform in the Cheby
shev metric. Using this technique\, a smooth approximation is constructed for the d.c.
functions. The optimization properties of these approximations are studie
d.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/36/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alexander Kruger (Federation University Australia)
DTSTART;VALUE=DATE-TIME:20210310T060000Z
DTEND;VALUE=DATE-TIME:20210310T070000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/37
DESCRIPTION:Title: Error bounds revisited\nby Alexander Kruger (Federation University
Australia) as part of Variational Analysis and Optimisation Webinar\n\n\nA
bstract\nWe propose a unifying general framework of quantitative primal an
d dual sufficient error bound conditions covering linear and nonlinear\, l
ocal and global settings. We expose the roles of the assumptions involved
in the error bound assertions\, in particular\, on the underlying space: g
eneral metric\, Banach or Asplund. Employing special collections of slope
operators\, we introduce a succinct form of sufficient error bound conditi
ons\, which allows one to combine in a single statement several different
assertions: nonlocal and local primal space conditions in complete metric
spaces\, and subdifferential conditions in Banach and Asplund spaces. In t
he nonlinear setting\, we cover both the conventional and the ‘alternati
ve’ error bound conditions.\n\nThis is joint work with Nguyen Duy Cuong
(Federation University). The talk is based on the paper:\nN. D. Cuong and
A. Y. Kruger\, Error bounds revisited\, arXiv: 2012.03941 (2020).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/37/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Bartl (Silesian University in Opava)
DTSTART;VALUE=DATE-TIME:20210317T060000Z
DTEND;VALUE=DATE-TIME:20210317T070000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/38
DESCRIPTION:Title: Every compact convex subset of matrices is the Clarke Jacobian of some
Lipschitzian mapping\nby David Bartl (Silesian University in Opava) as
part of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nGive
n a non-empty compact convex subset $P$ of $m \\times n$ matrices\, we sho
w constructively that there exists a Lipschitzian mapping $g\\colon {\\bf
R}^n \\to {\\bf R}^m$ such that its Clarke Jacobian $\\partial g(0) = P$.\
n
LOCATION:https://researchseminars.org/talk/VAWebinar/38/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jiri Outrata (Institute of Information Theory and Automation of th
e Czech Academy of Sciences)
DTSTART;VALUE=DATE-TIME:20210428T070000Z
DTEND;VALUE=DATE-TIME:20210428T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/39
DESCRIPTION:Title: On the solution of static contact problems with Coulomb friction via th
e semismooth* Newton method\nby Jiri Outrata (Institute of Information
Theory and Automation of the Czech Academy of Sciences) as part of Variat
ional Analysis and Optimisation Webinar\n\n\nAbstract\nThe lecture deals w
ith application of a new Newton-type method to the numerical solution of d
iscrete 3D contact problems with Coulomb friction. This method is well sui
ted to the solution of inclusions\, and the resulting conceptual algorithm exhib
its\, under appropriate conditions\, local superlinear convergence. Af
ter a description of the method\, a new model for the considered contact pro
blem\, amenable to the application of the new method\, will be presented.
The second part of the talk is then devoted to an efficient implementation
of the general algorithm and to numerical tests. Throughout the whole lec
ture\, various tools of modern variational analysis will be employed.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/39/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hung Phan (University of Massachusetts Lowell)
DTSTART;VALUE=DATE-TIME:20210512T010000Z
DTEND;VALUE=DATE-TIME:20210512T020000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/40
DESCRIPTION:Title: Adaptive splitting algorithms for the sum of operators\nby Hung Pha
n (University of Massachusetts Lowell) as part of Variational Analysis and
Optimisation Webinar\n\n\nAbstract\nA general optimization problem can of
ten be reduced to finding a zero of a sum of multiple (maximally) monotone
operators\, which creates challenging computational tasks as a whole. It
motivates the development of splitting algorithms in order to simplify the
computations by dealing with each operator separately\, hence the name "s
plitting". Some of the most successful splitting algorithms in application
s are the forward-backward algorithm\, the Douglas-Rachford algorithm\, an
d the alternating directions method of multipliers (ADMM). In this talk\,
we discuss some adaptive splitting algorithms for finding a zero of the su
m of operators. The main idea is to adapt the algorithm parameters to the
generalized monotonicity of the operators so that the generated sequence c
onverges to a fixed point.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/40/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Valentin Gorokhovik (Institute of Mathematics\, National Academy o
f Sciences of Belarus)
DTSTART;VALUE=DATE-TIME:20210526T070000Z
DTEND;VALUE=DATE-TIME:20210526T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/41
DESCRIPTION:Title: Abstract convexity of functions with respect to the set of Lipschitz co
ncave functions\nby Valentin Gorokhovik (Institute of Mathematics\, Na
tional Academy of Sciences of Belarus) as part of Variational Analysis and
Optimisation Webinar\n\n\nAbstract\nFor the functions defined on normed v
ector spaces we introduce the notion of the $\\mathcal{L}\\widehat{C}$-con
vexity that generalizes the classical notion of convex functions. A functi
on $f$ is called $\\mathcal{L}\\widehat{C}$-convex if it can be represente
d as the upper envelope of some subset of Lipschitz concave functions. In
the terminology of abstract convexity it means that $f$ is abstract convex
with respect to the set $\\mathcal{L}\\widehat{C}$ of Lipschitz concave f
unctions. It is proved that a function is $\\mathcal{L}\\widehat{C}$-conve
x if and only if it is lower semicontinuous and\, in addition\, it is boun
ded from below by a Lipschitz continuous function. For a function $f$ and
a point $x\\in \\mathrm{dom} \\\, f$ we introduce the notion of the $\\mat
hcal{L}\\widehat{C}$-subgradient as well as the notions of the $\\mathcal{
L}\\widehat{C}$-presubdifferential and the $\\mathcal{L}\\widehat{C}$-subd
ifferential of $f$ at $x$. We prove that for an $\\mathcal{L}\\widehat{C}$
-convex function $f$ the $\\mathcal{L}\\widehat{C}$-presubdifferential and
the $\\mathcal{L}\\widehat{C}$-subdifferential of the function $f$ are no
nempty at any point of the dense subset of $\\mathrm{dom}\\\, f$. This res
ult extends the well-known Brøndsted-Rockafellar theorem on the existence
of the Fenchel subdifferential of a conventional convex function to the wid
er class of lower semicontinuous functions. As an application we derive th
e $\\mathcal{L}\\widehat{C}$-subdifferential criterion for a global minimum a
nd the $\\mathcal{L}\\widehat{C}$-subdifferential necessary condition for a g
lobal maximum for a nonsmooth function.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/41/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Vuong Phan (University of Southampton)
DTSTART;VALUE=DATE-TIME:20210609T070000Z
DTEND;VALUE=DATE-TIME:20210609T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/42
DESCRIPTION:by Vuong Phan (University of Southampton) as part of Variation
al Analysis and Optimisation Webinar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/VAWebinar/42/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Scott B Lindstrom (Curtin University)
DTSTART;VALUE=DATE-TIME:20210602T070000Z
DTEND;VALUE=DATE-TIME:20210602T080000Z
DTSTAMP;VALUE=DATE-TIME:20210514T191452Z
UID:VAWebinar/43
DESCRIPTION:Title: A primal/dual computable approach to improving spiraling algorithms\, b
ased on minimizing spherical surrogates for Lyapunov functions\nby Sco
tt B Lindstrom (Curtin University) as part of Variational Analysis and Op
timisation Webinar\n\n\nAbstract\nOptimization problems are frequently tac
kled by iterative application of an operator whose fixed points allow for
fast recovery of locally optimal solutions. Under lightweight assumptions
\, stability is equivalent to the existence of a function---called a Lyapunov
function---that encodes structural information about both the problem and
the operator. Lyapunov functions are usually hard to find\, but if a pract
itioner had a priori knowledge---or a reasonable guess---about its struc
ture\, they could equivalently tackle the problem by seeking to minimize t
he Lyapunov function directly. We introduce a class of methods that does t
his. Interestingly\, for certain feasibility problems\, the circumcentered-ref
lection method (CRM) is an extant example therefrom. However\, CRM may not
lend itself well to primal/dual adaptation\, for reasons we show. Motivat
ed by the discovery of our new class\, we experimentally demonstrate the s
uccess of one of its other members\, implemented in a primal/dual framewor
k.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/43/
END:VEVENT
END:VCALENDAR