BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Tiến-Sơn Phạm (University of Dalat)
DTSTART:20200603T070000Z
DTEND:20200603T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/1
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/1/
 ">Openness\, Hölder metric regularity and Hölder continuity properties o
 f semialgebraic set-valued maps</a>\nby Tiến-Sơn Phạm (University of 
 Dalat) as part of Variational Analysis and Optimisation Webinar\n\n\nAbstr
 act\nGiven a semialgebraic set-valued map with closed graph\, we show that
  it is Hölder metrically subregular and that the following conditions are
  equivalent:\n\n(i) the map is an open map from its domain into its range 
 and the range of the map is locally closed\;\n\n(ii) the map is Hölder metrically
  regular\;\n\n(iii) the inverse map is pseudo-Hölder continuous\;\n\n(iv)
  the inverse map is lower pseudo-Hölder continuous.\n\nAn application\, v
 ia Robinson’s normal map formulation\, leads to the following result in 
 the context of semialgebraic variational inequalities: if the solution map
  (as a map of the parameter vector) is lower semicontinuous then the solut
 ion map is finite and pseudo-Hölder continuous. In particular\, we obtain 
 a negative answer to a question mentioned in the paper of Dontchev and Roc
 kafellar [Characterizations of strong regularity for variational inequalit
 ies over polyhedral convex sets. SIAM J. Optim.\, 6(4):1087–1105\, 1996]
 . As a byproduct\, we show that for a (not necessarily semialgebraic) cont
 inuous single-valued map\, openness and non-extremality are equiva
 lent. This fact improves the main result of Pühl [Convexity and openness 
 with linear rate. J. Math. Anal. Appl.\, 227:382–395\, 1998]\, which req
 uires the convexity of the map in question.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michel Théra (University of Limoges)
DTSTART:20200617T070000Z
DTEND:20200617T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/2
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/2/
 ">Old and new results on equilibrium and quasi-equilibrium problems</a>\nb
 y Michel Théra (University of Limoges) as part of Variational Analysis an
 d Optimisation Webinar\n\n\nAbstract\nIn this talk I will briefly survey s
 ome old results which go back to Ky Fan and Brezis-Nirenberg and St
 ampacchia. Then I will give some new results related to the existence of s
 olutions to equilibrium and quasi-equilibrium problems without any convex
 ity assumption. Coverage includes some equivalences to the Ekeland variati
 onal principle for bifunctions and basic facts about transfer lower contin
 uity. An application is given to systems of quasi-equilibrium problems.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marco A. López-Cerdá (University of Alicante)
DTSTART:20200624T070000Z
DTEND:20200624T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/3
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/3/
 ">Optimality conditions in convex semi-infinite optimization. An approach 
 based on the subdifferential of the supremum function</a>\nby Marco A. Ló
 pez-Cerdá (University of Alicante) as part of Variational Analysis and Op
 timisation Webinar\n\n\nAbstract\nWe present a survey on optimality condit
 ions (of Fritz-John and KKT-type) for semi-infinite convex optimization pr
 oblems. The methodology is based on the use of the subdifferential of the 
 supremum of the infinite family of constraint functions. Our approach aims
  to establish weak constraint qualifications and\, in the last step\, to d
 rop the usual continuity/closedness assumptions which are standard in
  the literature. The material in this survey is extracted from the follow
 ing papers:\n\nR. Correa\, A. Hantoute\, M. A. López\, Weaker conditions 
 for subdifferential calculus of convex functions. J. Funct. Anal. 271 (201
 6)\, 1177-1212.\n\nR. Correa\, A. Hantoute\, M. A. López\, Moreau-Rockafe
 llar type formulas for the subdifferential of the supremum function. SIAM 
 J. Optim. 29 (2019)\, 1106-1130.\n\nR. Correa\, A. Hantoute\, M. A. López
 \, Valadier-like formulas for the supremum function II: the compactly inde
 xed case. J. Convex Anal. 26 (2019)\, 299-324.\n\nR. Correa\, A. Hantoute\
 , M. A. López\, Subdifferential of the supremum via compactification of t
 he index set. To appear in Vietnam J. Math. (2020).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hoa Bui (Curtin University)
DTSTART:20200708T070000Z
DTEND:20200708T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/4
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/4/
 ">Zero Duality Gap Conditions via Abstract Convexity</a>\nby Hoa Bui (Curt
 in University) as part of Variational Analysis and Optimisation Webinar\n\
 n\nAbstract\nUsing tools provided by the theory of abstract convexity\, we
  extend conditions for zero duality gap to the context of nonconvex and no
 nsmooth optimization. Generalizing the classical setting\, an abstract con
 vex function is the upper envelope of a subset of a family of abstract aff
 ine functions (the conventional vertical translations of the abstract li
 near functions). We establish new characterizations of the zero duality ga
 p under no assumptions on the topology on the space of abstract linear fun
 ctions. Endowing the latter space with the topology of pointwise convergen
 ce\, we extend several fundamental facts of conventional convex analys
 is. In particular\, we prove that the zero duality gap property can be sta
 ted in terms of an inclusion involving ε-subdifferentials\, which are sho
 wn to possess a sum rule. These conditions are new even in conventional co
 nvex cases. The Banach-Alaoglu-Bourbaki theorem is extended to the space o
 f abstract linear functions. The latter result extends a fact recently est
 ablished by Borwein\, Burachik and Yao in the conventional convex case.\n\
 nThis talk is based on a joint work with Regina Burachik\, Alex Kruger and
  David Yost.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:James Saunderson (Monash University)
DTSTART:20200715T070000Z
DTEND:20200715T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/5
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/5/
 ">Lifting for simplicity: concise descriptions of convex sets</a>\nby Jame
 s Saunderson (Monash University) as part of Variational Analysis and Optim
 isation Webinar\n\n\nAbstract\nThis talk will give a selective tour throug
 h the theory and applications of lifts of convex sets. A lift of a convex 
 set is a higher-dimensional convex set that projects onto the original set
 . Many interesting convex sets have lifts that are dramatically simpler to
  describe than the original set. Finding such simple lifts has significant
  algorithmic implications\, particularly for associated optimization probl
 ems. We will consider both the classical case of polyhedral lifts\, which 
 are described by linear inequalities\, as well as spectrahedral lifts\, wh
 ich are defined by linear matrix inequalities. The tour will include discu
 ssion of ways to construct lifts\, ways to find obstructions to the existe
 nce of lifts\, and a number of interesting examples from a variety of math
 ematical contexts. (Based on joint work with H. Fawzi\, J. Gouveia\, P. Pa
 rrilo\, and R. Thomas).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Akiko Takeda (University of Tokyo)
DTSTART:20200729T070000Z
DTEND:20200729T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/6
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/6/
 ">Deterministic and Stochastic Gradient Methods for Non-Smooth  Non-Convex
  Regularized Optimization</a>\nby Akiko Takeda (University of Tokyo) as pa
 rt of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nOur wor
 k focuses on deterministic/stochastic gradient methods for optimizing a sm
 ooth non-convex loss function with a non-smooth non-convex regularizer. Re
 search on stochastic gradient methods is quite limited\, and until recentl
 y no non-asymptotic convergence results had been reported. After showing 
 a deterministic approach\, we present simple stochastic gradient algorithm
 s\, for finite-sum and general stochastic optimization problems\, which ha
 ve superior convergence complexities compared to the current state-of-the-
 art. We also compare our algorithms’ performance in practice for empiric
 al risk minimization.\n\nThis is based on joint works with Tianxiang Liu\
 \, Ting Kei Pong and Michael R. Metel.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Evgeni Nurminski (Far Eastern Federal University)
DTSTART:20200805T070000Z
DTEND:20200805T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/7
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/7/
 ">Practical Projection with Applications</a>\nby Evgeni Nurminski (Far Eas
 tern Federal University) as part of Variational Analysis and Optimisation 
 Webinar\n\n\nAbstract\nProjection of a point on a given set is a very comm
 on computational operation in an endless number of algorithms and applicat
 ions. However\, with the exception of the simplest sets\, it is by itself 
 a nontrivial operation\, often complicated by large dimension\, computational
  degeneracy\, nonuniqueness (even for orthogonal projection on convex sets
  in certain situations)\, and so on. This talk aims to present some practi
 cal solutions\, i.e. finite algorithms\, for projection onto polyhedral sets
 \, among them: simplices\, polytopes\, polyhedra\, finitely generated cones 
 with a certain discussion of “nonlinearities”\, decomposition and para
 llel computations. We also consider the application of the projection operati
 on in linear optimization and an epi-projection algorithm for convex optimizat
 ion.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Xiaoqi Yang (The Hong Kong Polytechnic University)
DTSTART:20200812T070000Z
DTEND:20200812T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/8
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/8/
 ">On error bound moduli for locally Lipschitz and regular functions</a>\nb
 y Xiaoqi Yang (The Hong Kong Polytechnic University) as part of Variationa
 l Analysis and Optimisation Webinar\n\n\nAbstract\nWe first introduce for 
 a closed and convex set two classes of subsets: the near and far ends rela
 tive to a point\, and give some full characterizations for these end sets 
 by virtue of the face theory of closed and convex sets. We provide some co
 nnections between closedness of the far (near) end and the relative contin
 uity of the gauge (cogauge) for closed and convex sets. We illustrate that
  the distance from 0 to the outer limiting subdifferential of the support 
 function of the subdifferential set\, which is essentially the distance fr
 om 0 to the end set of the subdifferential set\, is an upper estimate of t
 he local error bound modulus. This upper estimate becomes tight for a conv
 ex function under some regularity conditions. We show that the distance fr
 om 0 to the outer limiting subdifferential set of a lower C^1 function is 
 equal to the local error bound modulus.\n\n\nReferences:\nLi\, M.H.\, Meng
 \, K.W. and Yang\, X.Q.\, On far and near ends of closed and convex sets. J
 ournal of Convex Analysis 27 (2020) 407–421.\nLi\, M.H.\, Meng\, K.W. and
  Yang\, X.Q.\, On error bound moduli for locally Lipschitz and regular func
 tions\, Math. Program. 171 (2018) 463–487.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marián Fabian (Czech Academy of Sciences)
DTSTART:20200701T070000Z
DTEND:20200701T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/9
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/9/
 ">Can Pourciau’s open mapping theorem be derived from Clarke’s inverse
  mapping theorem?</a>\nby Marián Fabian (Czech Academy of Sciences) as pa
 rt of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nWe disc
 uss the possibility of deriving Pourciau’s open mapping theorem from Cla
 rke’s inverse mapping theorem. These theorems work with the Clarke gener
 alized Jacobian. In our journey\, we will face several interesting phenome
 na and pitfalls in the world of (just) 2 by 3 matrices.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Oliver Stein (Karlsruhe Institute of Technology)
DTSTART:20200722T070000Z
DTEND:20200722T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/10
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/10
 /">A general branch-and-bound framework for global multiobjective optimiza
 tion</a>\nby Oliver Stein (Karlsruhe Institute of Technology) as part of V
 ariational Analysis and Optimisation Webinar\n\n\nAbstract\nWe develop a g
 eneral framework for branch-and-bound methods in multiobjective optimizati
 on. Our focus is on natural generalizations of notions and techniques from
  the single objective case. In particular\, after the notions of upper and
  lower bounds on the globally optimal value from the single objective case
  have been transferred to upper and lower bounding sets on the set of nond
 ominated points for multiobjective programs\, we discuss several possibili
 ties for discarding tests. They compare local upper bounds of the provisio
 nal nondominated sets with relaxations of partial upper image sets\, where
  the latter can stem from ideal point estimates\, from convex relaxations\
 \, or from relaxations by a reformulation-linearization technique.\n\n
 The discussion of approximation properties of the provisional nondominated
  set leads to the suggestion for a natural selection rule along with a nat
 ural termination criterion. Finally\, we discuss some issues which do not oc
 cur in the single objective case and which impede some desirable convergen
 ce properties\, thus also motivating a natural generalization of the conve
 rgence concept.\n\nThis is joint work with Gabriele Eichfelder\, Peter Kir
 st\, and Laura Meng.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christiane Tammer (Martin Luther University Halle-Wittenberg)
DTSTART:20200909T070000Z
DTEND:20200909T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/11
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/11
 /">Subdifferentials and Lipschitz properties of translation invariant func
 tionals and applications</a>\nby Christiane Tammer (Martin Luther Universi
 ty Halle-Wittenberg) as part of Variational Analysis and Optimisation Webi
 nar\n\n\nAbstract\nIn the talk\, we are dealing with translation invariant
  functionals and their application for deriving necessary conditions for m
 inimal solutions of constrained and unconstrained optimization problems wi
 th respect to general domination sets.\n\nTranslation invariant functional
 s are a natural and powerful tool for the separation of not necessarily co
 nvex sets and scalarization. There are many applications of translation in
 variant functionals in nonlinear functional analysis\, vector optimization
 \, set optimization\, optimization under uncertainty\, mathematical financ
 e as well as consumer and production theory.\n\nThe primary objective of t
 his talk is to establish formulas for basic and singular subdifferentials 
 of translation invariant functionals and to study important properties suc
 h as monotonicity\, the PSNC property\, the Lipschitz behavior\, etc. of t
 hese nonlinear functionals without assuming that the shifted set involved 
 in the definition of the functional is convex. The second objective is to 
 propose a new way to scalarize a set-valued optimization problem. It allow
 s us to study necessary conditions for minimal solutions in a very broad s
 etting in which the domination set is not necessarily convex or solid or c
 onical. The third objective is to apply our results to vector-valued appro
 ximation problems.\n\nThis is a joint work with T.Q. Bao (Northern Michiga
 n University).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gerd Wachsmuth (BTU)
DTSTART:20200902T070000Z
DTEND:20200902T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/12
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/12
 /">New Constraint Qualifications for Optimization Problems in Banach Space
 s based on Asymptotic KKT Conditions</a>\nby Gerd Wachsmuth (BTU) as part 
 of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nOptimizati
 on theory in Banach spaces suffers from the lack of available constraint q
 ualifications. The few constraint qualifications that do exist are\, in ad
 dition\, often violated even in simple 
 applications. This is very much in contrast to finite-dimensional nonlinea
 r programs\, where a large number of constraint qualifications is known. S
 ince these constraint qualifications are usually defined using the set of 
 active inequality constraints\, it is difficult to extend them to the infi
 nite-dimensional setting. One exception is a recently introduced sequentia
 l constraint qualification based on asymptotic KKT conditions. This paper 
 shows that this so-called asymptotic KKT regularity allows suitable extens
 ions to the Banach space setting in order to obtain new constraint qualifi
 cations. The relation of these new constraint qualifications to existing o
 nes is discussed in detail. Their usefulness is also shown by several exam
 ples as well as an algorithmic application to the class of augmented Lagra
 ngian methods.\n\nThis is a joint work with Christian Kanzow (Würzburg) a
 nd Patrick Mehlitz (Cottbus).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Regina Burachik (UniSA)
DTSTART:20200923T070000Z
DTEND:20200923T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/13
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/13
 /">A Primal–Dual Penalty Method via Rounded Weighted-$L_1$ Lagrangian Du
 ality</a>\nby Regina Burachik (UniSA) as part of Variational Analysis and 
 Optimisation Webinar\n\n\nAbstract\nWe propose a new duality scheme based 
 on a sequence of smooth minorants of the weighted-$l_1$ penalty function\,
  interpreted as a parametrized sequence of augmented Lagrangians\, to sol
 ve nonconvex constrained optimization problems. For the induced sequence o
 f dual problems\, we establish strong asymptotic duality properties. Namel
 y\, we show that (i) the sequence of dual problems is convex and (ii) the
  dual values monotonically increase to the optimal primal value. We use th
 ese properties to devise a subgradient-based primal–dual method\, and sh
 ow that the generated primal sequence accumulates at a solution of the ori
 ginal problem. We illustrate the performance of the new method with three 
 different types of test problems: a polynomial nonconvex problem\, large-s
 cale instances of the celebrated kissing number problem\, and the Markov
 –Dubins problem. Our numerical experiments demonstrate that\, when compa
 red with the traditional implementation of a well-known smooth solver\, ou
 r new method (using the same well-known solver in its subproblem) can find
  better quality solutions\, i.e.\, “deeper” local minima\, or solution
 s closer to the global minimum. Moreover\, our method seems to be more tim
 e efficient\, especially when the problem has a large number of constraint
 s.\n\nThis is a joint work with C. Y. Kaya (UniSA) and C. J. Price (Univer
 sity of Canterbury\, Christchurch\, New Zealand).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christopher Price (University of Canterbury)
DTSTART:20200916T070000Z
DTEND:20200916T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/14
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/14
 /">A direct search method for constrained optimization via the rounded $l_
 1$ penalty function.</a>\nby Christopher Price (University of Canterbury) 
 as part of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nTh
 is talk looks at the constrained optimization problem when the objective a
 nd constraints are Lipschitz continuous black box functions. The approach 
 uses a sequence of smoothed and offset $\\ell_1$ penalty functions. The me
 thod generates an approximate minimizer to each penalty function\, and the
 n adjusts the offsets and other parameters. The smoothing is steadily redu
 ced\, ultimately revealing the $\\ell_1$ exact penalty function. The metho
 d preferentially uses a discrete quasi-Newton step\, backed up by a global
  direction search. Theoretical convergence results are given for the smoot
 h and non-smooth cases subject to relevant conditions. Numerical results a
 re presented on a variety of problems with non-smooth objective or constra
 int functions. These results show the method is effective 
 in practice.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yalçın Kaya (UniSA)
DTSTART:20200930T070000Z
DTEND:20200930T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/15
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/15
 /">Constraint Splitting and Projection Methods for Optimal Control</a>\nby
  Yalçın Kaya (UniSA) as part of Variational Analysis and Optimisation We
 binar\n\n\nAbstract\nWe consider a class of optimal control problems with 
 constrained control variable. We split the ODE constraint and the control 
 constraint of the problem so as to obtain two optimal control subproblems 
 for each of which solutions can be written simply.  Employing these simple
 r solutions as projections\, we find numerical solutions to the original p
 roblem by applying four different projection-type methods: (i) Dykstra’s
  algorithm\, (ii) the Douglas–Rachford (DR) method\, (iii) the Aragón A
 rtacho–Campoy (AAC) algorithm and (iv) the fast iterative shrinkage-thre
 sholding algorithm (FISTA).  The problem we study is posed in infinite-dim
 ensional Hilbert spaces. Behaviour of the DR and AAC algorithms is explor
 ed via numerical experiments with respect to their parameters. An error an
 alysis is also carried out numerically for a particular instance of the pr
 oblem for each of the algorithms.  This is joint work with Heinz Bauschke 
 and Regina Burachik.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hieu Thao Nguyen (TU Delft)
DTSTART:20200819T070000Z
DTEND:20200819T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/16
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/16
 /">Projection algorithms for phase retrieval with high numerical aperture<
 /a>\nby Hieu Thao Nguyen (TU Delft) as part of Variational Analysis and Op
 timisation Webinar\n\n\nAbstract\nWe develop the mathematical framework in
  which the class of projection algorithms can be applied to high numerical
  aperture (NA) phase retrieval. Within this framework we first analyze the
  basic steps of solving this problem by projection algorithms and establis
 h the closed forms of all the relevant prox-operators. We then study the g
 eometry of the high-NA phase retrieval problem and the obtained results ar
 e subsequently used to establish convergence criteria of projection algori
 thms. Making use of the vectorial point-spread-function (PSF) is\, on the 
 one hand\, the key difference between this work and the literature of phas
 e retrieval mathematics which mostly deals with the scalar PSF. The result
 s of this paper\, on the other hand\, can be viewed as extensions of those
  concerning projection methods for low-NA phase retrieval. Importantly\, t
 he improved performance of projection methods over the other classes of ph
 ase retrieval algorithms in the low-NA setting now also becomes applicable
  to the high-NA case. This is demonstrated by the accompanying numerical r
 esults which show that all available solution approaches for high-NA phase
  retrieval are outperformed by projection methods.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Reinier Diaz Millan (Deakin University)
DTSTART:20201007T060000Z
DTEND:20201007T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/17
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/17
 /">An algorithm for pseudo-monotone operators with application to rational
  approximation</a>\nby Reinier Diaz Millan (Deakin University) as part of 
 Variational Analysis and Optimisation Webinar\n\n\nAbstract\nThe motivatio
 n of this paper is the development of an optimisation method for solving o
 ptimisation problems appearing in Chebyshev rational and generalised ratio
 nal approximation\, where the approximations are constructed as r
 atios of linear forms (linear combinations of basis functions). The coeffic
 ients of the linear forms are subject to optimisation and the basis functi
 ons are continuous functions. It is known that the objective functions in g
 eneralised rational approximation problems are quasi-convex. In this paper
  we also prove a stronger result: the objective functions are pseudo-conv
 ex. Then we develop numerical methods that are efficient for a wide rang
 e of pseudo-convex functions and test them on generalised rational approxi
 mation problems.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jein-Shan Chen (NTNU)
DTSTART:20200826T070000Z
DTEND:20200826T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/18
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/18
 /">Two approaches for absolute value equation by using smoothing functions
 </a>\nby Jein-Shan Chen (NTNU) as part of Variational Analysis and Optimis
 ation Webinar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/VAWebinar/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Björn Rüffer (University of Newcastle)
DTSTART:20201014T060000Z
DTEND:20201014T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/19
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/19
 /">A Lyapunov perspective to projection algorithms</a>\nby Björn Rüffer 
 (University of Newcastle) as part of Variational Analysis and Optimisation
  Webinar\n\n\nAbstract\nThe operator theoretic point of view has been very
  successful in the study of iterative splitting methods under a unified fr
 amework. These algorithms include the Method of Alternating Projections as
  well as the Douglas-Rachford Algorithm\, which is dual to the Alternating
  Direction Method of Multipliers\, and they allow nice geometric interpret
 ations. While convergence results for these algorithms have been known for
  decades when problems are convex\, for non-convex problems progress on co
 nvergence results has significantly increased once arguments based on Lyap
 unov functions were used. In this talk we give an overview of the underlyi
 ng techniques in Lyapunov's direct method and look at convergence of itera
 tive projection methods through this lens.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wilfredo Sosa (UCB)
DTSTART:20201021T060000Z
DTEND:20201021T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/20
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/20
 /">On diametrically maximal sets\, maximal premonotone maps and promonote 
 bifunctions</a>\nby Wilfredo Sosa (UCB) as part of Variational Analysis an
 d Optimisation Webinar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/VAWebinar/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Radek Cibulka (University of West Bohemia)
DTSTART:20201028T060000Z
DTEND:20201028T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/21
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/21
 /">Continuous selections for inverse mappings in Banach spaces</a>\nby Rad
 ek Cibulka (University of West Bohemia) as part of Variational Analysis an
 d Optimisation Webinar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/VAWebinar/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ernest Ryu (Seoul National University)
DTSTART:20201125T060000Z
DTEND:20201125T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/22
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/22
 /">Scaled Relative Graph: Nonexpansive operators via 2D Euclidean Geometry
 </a>\nby Ernest Ryu (Seoul National University) as part of Variational Ana
 lysis and Optimisation Webinar\n\n\nAbstract\nMany iterative methods in ap
 plied mathematics can be thought of as fixed-point iterations\, and such a
 lgorithms are usually analyzed analytically\, with inequalities. In this w
 ork\, we present a geometric approach to analyzing contractive and nonexpa
 nsive fixed point iterations with a new tool called the scaled relative gr
 aph (SRG). The SRG provides a rigorous correspondence between nonlinear op
 erators and subsets of the 2D plane. Under this framework\, a geometric ar
 gument in the 2D plane becomes a rigorous proof of contractiveness of the 
 corresponding operator.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Vinesha Peiris (Swinburne University of Technology)
DTSTART:20201111T060000Z
DTEND:20201111T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/23
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/23
 /">The extension of linear inequality method for generalised rational Cheb
 yshev approximation</a>\nby Vinesha Peiris (Swinburne University of Techno
 logy) as part of Variational Analysis and Optimisation Webinar\n\n\nAbstra
 ct\nIn this talk we will demonstrate the correspondence between the linear
  inequality method developed for rational Chebyshev approximation and the 
 bisection method used in quasiconvex optimisation. It naturally connects r
 ational and generalised rational Chebyshev approximation problems with mod
 ern developments in the area of quasiconvex functions. Moreover\, the line
 ar inequality method can be extended to a broader class of Chebyshev appro
 ximation problems\, where the corresponding objective functions remain qua
 siconvex.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chayne Planiden (University of Wollongong)
DTSTART:20201104T060000Z
DTEND:20201104T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/24
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/24
 /">New gradient and Hessian approximation methods for derivative-free opti
 misation</a>\nby Chayne Planiden (University of Wollongong) as part of Var
 iational Analysis and Optimisation Webinar\n\n\nAbstract\nIn general\, der
 ivative-free optimisation (DFO) uses approximations of first- and second-o
 rder information in minimisation algorithms. DFO is found in direct-search
 \, model-based\, trust-region and other mainstream optimisation techniques
  and has been gaining popularity in recent years. This work discusses previous r
 esults on some particular uses of DFO: the proximal bundle method and the 
 VU-algorithm\, and then presents improvements made this year on the gradie
 nt and Hessian approximation techniques. These improvements can be inserte
 d into any routine that requires such estimations.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aram Arutyunov and S.E. Zhukovskiy (Moscow State Uni/ICS RAS)
DTSTART:20201118T060000Z
DTEND:20201118T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/25
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/25
 /">Local and Global Inverse and Implicit Function Theorems</a>\nby Aram Ar
 utyunov and S.E. Zhukovskiy (Moscow State Uni/ICS RAS) as part of Variatio
 nal Analysis and Optimisation Webinar\n\n\nAbstract\nIn the talk\, we pres
 ent a local inverse function theorem on a cone in a neighbourhood of an ab
 normal point. We present a global inverse function theorem in the form of 
 a theorem on a trivial bundle\, guaranteeing that if a smooth mapping of f
 inite-dimensional spaces is uniformly nonsingular\, then it has a smooth r
 ight inverse satisfying an a priori estimate. We also present a global implicit func
 tion theorem guaranteeing the existence and continuity of a global implici
 t function under the condition that the mappings in question are uniformly
  nonsingular. The generalization of these results to the case of mappings 
 of Hilbert spaces and Banach spaces is discussed.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/25/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nam Ho-Nguyen (University of Sydney)
DTSTART:20210210T000000Z
DTEND:20210210T010000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/26
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/26
 /">Coordinate Descent Without Coordinates: Tangent Subspace Descent on Rie
 mannian Manifolds</a>\nby Nam Ho-Nguyen (University of Sydney) as part of 
 Variational Analysis and Optimisation Webinar\n\n\nAbstract\nWe consider a
 n extension of the coordinate descent algorithm to manifold domains\, and 
 provide convergence analyses for geodesically convex and non-convex smooth
  objective functions. Our key insight is to draw an analogy between coordi
 nate blocks in Euclidean space and tangent subspaces of a manifold. Hence\
 , our method is called tangent subspace descent (TSD). The core principle 
 behind ensuring convergence of TSD is the appropriate choice of subspace a
 t each iteration. To this end\, we propose two novel conditions: the gap e
 nsuring and $C$-randomized norm conditions on deterministic and randomized
  modes of subspace selection respectively. These ensure convergence for sm
 ooth functions\, and are satisfied in practical contexts. We propose two s
 ubspace selection rules of particular practical interest that satisfy thes
 e conditions: a deterministic one for the manifold of square orthogonal ma
 trices\, and a randomized one for the more general Stiefel manifold.\n(Thi
 s is joint work with David Huckleberry Gutman\, Texas Tech University.)\n
LOCATION:https://researchseminars.org/talk/VAWebinar/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Javier Peña (Carnegie-Mellon University)
DTSTART:20210303T000000Z
DTEND:20210303T010000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/27
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/27
 /">The condition number of a function relative to a set</a>\nby Javier Pe
 ña (Carnegie-Mellon University) as part of Variational Analysis and Optim
 isation Webinar\n\n\nAbstract\nThe condition number of a differentiable co
 nvex function\, namely the ratio of its smoothness to strong convexity con
 stants\, is closely tied to fundamental properties of the function. In par
 ticular\, the condition number of a quadratic convex function is the squar
 e of the aspect ratio of a canonical ellipsoid associated to the function.
  Furthermore\, the condition number of a function bounds the linear rate o
 f convergence of the gradient descent algorithm for unconstrained convex m
 inimization.\n\nWe propose a condition number of a differentiable convex f
 unction relative to a reference set and distance function pair. This relat
 ive condition number is defined as the ratio of a relative smoothness cons
 tant to a relative strong convexity constant. We show that the relative condition 
 number extends the main properties of the traditional condition number bot
 h in terms of its geometric insight and in terms of its role in characteri
 zing the linear convergence of first-order methods for constrained convex 
 minimization.\n\nThis is joint work with David H. Gutman at Texas Tech Uni
 versity.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Russell Luke (University of Göttingen)
DTSTART:20210407T070000Z
DTEND:20210407T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/28
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/28
 /">Inconsistent Stochastic Feasibility: the Case of Stochastic Tomography<
 /a>\nby Russell Luke (University of Göttingen) as part of Variational Ana
 lysis and Optimisation Webinar\n\n\nAbstract\nIn an X-FEL experiment\, hig
 h-energy x-ray pulses are shot with high repetition rates on a stream of
  identical single biomolecules and the scattered photons are recorded on a
  pixelized detector. These experiments provide a new and unique route to m
 acromolecular structure determination at room temperature\, without the ne
 ed for crystallization\, and at low material usage. The main challenges in
  these experiments are the extremely low signal-to-noise ratio due to the 
 very low expected photon count per scattering image (10-50) and the unknow
 n orientation of the molecules in each scattering image.\n\nMathematically
 \, this is a stochastic computed tomography problem where the goal is to r
 econstruct a three-dimensional object from noisy two-dimensional images of
  a nonlinear mapping whose orientation relative to the object is both rand
 om and unobservable. The idea is to develop a two-step procedure for solvi
 ng this problem. In the first step\, we numerically compute a probability 
 distribution associated with the observed patterns (taken together) as the
  stationary measure of a Markov chain whose generator is constructed from 
 the individual observations. Correlation in the data and other a priori in
 formation is used to further constrain the problem and accelerate converge
 nce to a stationary measure. With the stationary measure in hand\, the sec
 ond step involves solving a phase retrieval problem for the mean electron 
 density relative to a fixed reference orientation.\n\nThe focus of this ta
 lk is conceptual\, and involves re-envisioning projection algorithms as Ma
 rkov chains. We already present some new routes to “old” results\, and a f
 undamental new approach to understanding and accounting for numerical comp
 utation on conventional computers.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/28/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Huynh Van Ngai (University of Quy Nhon)
DTSTART:20210324T060000Z
DTEND:20210324T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/29
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/29
 /">Generalized Nesterov's accelerated proximal gradient algorithms with co
 nvergence rate of order $o(1/k^2)$</a>\nby Huynh Van Ngai (University of Q
 uy Nhon) as part of Variational Analysis and Optimisation Webinar\n\n\nAbs
 tract\nThe accelerated gradient method initiated by Nesterov is now recogn
 ized to be one of the most powerful tools for solving smooth convex optimi
 zation problems. This method significantly improves the convergence rate o
 f function values from $O(1/k)$ of the standard gradient method down to $O
 (1/k^2).$ In this paper\, we present two generalized variants of Nesterov'
 s accelerated proximal gradient method for solving composite convex opti
 mization problems in which the objective function is represented by the su
 m of a smooth convex function and a nonsmooth convex part. We show that wi
 th suitable ways to pick the sequences of parameters\, the convergence rat
 e for the function values of this proposed method is actually of order $o
 (1/k^2).$ In particular\, when the objective function is $p$-uniformly convex
  for $p>2\,$ the convergence rate is of order $O\\left(\\ln k/k^{2p/(p-2)}
 \\right)\,$ and the convergence is linear if the objective function is str
 ongly convex. As a by-product\, we derive a forward-backward algorithm generali
 zing the one by Attouch-Peypouquet [SIAM J. Optim.\, 26(3)\, 1824-1834\, (
 2016)]\, which produces a convergent sequence with a convergence rate of 
 the function values of order $o(1/k^2).$\n
LOCATION:https://researchseminars.org/talk/VAWebinar/29/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yboon Garcia Ramos (Universidad del Pacífico)
DTSTART:20210331T000000Z
DTEND:20210331T010000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/30
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/30
 /">Characterizing quasiconvexity of the pointwise infimum of a family of  
 arbitrary translations of quasiconvex functions</a>\nby Yboon Garcia Ramos
  (Universidad del Pacífico) as part of Variational Analysis and Optimisat
 ion Webinar\n\n\nAbstract\nIn this talk we will present some results conce
 rning the problem of preserving quasiconvexity when summing up quasiconve
 x functions and we will relate it to the problem of preserving quasiconvex
 ity when taking the infimum of a family of quasiconvex functions. To devel
 op our study\, the notion of quasiconvex family is introduced\, and we est
 ablish various characterizations of such a concept.\n\nJoint work with Fab
 ián Flores\, Universidad de Concepción and Nicolas Hadjisavvas\, Univers
 ity of the Aegean.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/30/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ewa Bednarczuk (Warsaw University of Technology and Systems Resear
 ch Institute of the PAS)
DTSTART:20210421T070000Z
DTEND:20210421T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/31
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/31
 /">On  duality for nonconvex minimization problems within the framework of
  abstract convexity</a>\nby Ewa Bednarczuk (Warsaw University of Technolog
 y and Systems Research Institute of the PAS) as part of Variational Analys
 is and Optimisation Webinar\n\n\nAbstract\nBy applying the perturbation fu
 nction approach\, we propose the Lagrangian and the conjugate duals for mi
 nimization problems of the sum of two\, generally nonconvex\, functions. T
 he main tool is the abstract convexity theory\, called $\\Phi$-convexity\,
  and minimax theorems for $\\Phi$-convex functions. We provide condi
 tions ensuring zero duality gap and introduce generalized Karush-Kuhn-Tuck
 er conditions that characterize solutions to primal and dual problems. We 
 also discuss the relationship between the dual problems proposed in the prese
 nt investigation and some conjugate-type duals existing in the literature.
  The presentation is based on joint works with Monika Syga.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/31/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Roger Behling (Fundação Getúlio Vargas)
DTSTART:20210414T010000Z
DTEND:20210414T020000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/32
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/32
 /">Circumcentering projection type methods</a>\nby Roger Behling (Fundaç
 ão Getúlio Vargas) as part of Variational Analysis and Optimisation Webi
 nar\n\n\nAbstract\nEnforcing successive projections\, averaging the compos
 ition of reflections and barycentering projections are well-established techniques 
 for solving convex feasibility problems. These schemes are called the meth
 od of alternating projections (MAP)\, the Douglas-Rachford method (DRM) an
 d the Cimmino method (CimM)\, respectively. Recently\, we have developed t
 he circumcentered-reflection method (CRM)\, whose iterations employ genera
 lized circumcenters that are able to accelerate the aforementioned classic
 al approaches both theoretically and numerically. In this talk\, the main 
 results on CRM are presented and a glimpse on future work will be provided
  as well.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/32/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alexander J. Zaslavski (The Technion - Israel Institute of Technol
 ogy)
DTSTART:20210217T060000Z
DTEND:20210217T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/33
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/33
 /">Subgradient Projection Algorithm with Computational Errors</a>\nby Alex
 ander J. Zaslavski (The Technion - Israel Institute of Technology) as part
  of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nWe study 
 the subgradient projection algorithm for minimization of convex and nonsmo
 oth functions\, in the presence of computational errors. We show that 
 our algorithm generates a good approximate solution if computational err
 ors are bounded from above by a small positive constant. Moreover\, for a
  known computational error\, we determine what approximate solution can 
 be obtained and how many iterates one needs for this.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/33/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yura Malitsky (Linköping University)
DTSTART:20210519T070000Z
DTEND:20210519T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/34
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/34
 /">Adaptive gradient descent without descent</a>\nby Yura Malitsky (Linkö
 ping University) as part of Variational Analysis and Optimisation Webinar\
 n\n\nAbstract\nIn this talk I will present some recent results for the mos
 t classical optimization method — gradient descent. We will show that a 
 simple zero cost rule is sufficient to completely automate gradient descen
 t. The method adapts to the local geometry\, with convergence guarantees d
 epending only on the smoothness in a neighborhood of a solution. The prese
 ntation is based on a joint work with K. Mishchenko\, see\nhttps://arxiv.o
 rg/abs/1910.09529.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/34/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nguyen Duy Cuong (Federation University)
DTSTART:20210224T060000Z
DTEND:20210224T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/35
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/35
 /">Necessary conditions for transversality properties</a>\nby Nguyen Duy C
 uong (Federation University) as part of Variational Analysis and Optimisat
 ion Webinar\n\n\nAbstract\nTransversality properties of collections of set
 s play an important role in optimization and variational analysis\, e.g.\,
  as constraint qualifications\, qualification conditions in subdifferentia
 l\, normal cone and coderivative calculus\, and convergence analysis of co
 mputational algorithms. In this talk\, we present some new results on prim
 al (geometric\, metric\, slope) and dual (subdifferential\, normal cone) n
 ecessary (in some cases also sufficient) conditions for transversality pro
 perties in both linear and nonlinear settings. Quantitative relations betw
 een transversality properties and the corresponding regularity properties 
 of set-valued mappings are also discussed.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/35/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lyudmila Polyakova (Saint-Petersburg State University)
DTSTART:20210505T070000Z
DTEND:20210505T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/36
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/36
 /">Smooth approximations of D.C. functions</a>\nby Lyudmila Polyakova (Sai
 nt-Petersburg State University) as part of Variational Analysis and Optimi
 sation Webinar\n\n\nAbstract\nAn investigation of properties of difference
  of convex functions is based on the basic facts and theorems of convex an
 alysis\, as the class of convex functions is one of the most investigated 
 among nonsmooth functions. For an arbitrary convex function a family of co
 ntinuously differentiable approximations is constructed using the infimal 
 convolution operation. If the domain of the considered function is compact
  then such smooth convex approximations are uniform in the Chebyshev metri
 c. Using this technique\, a smooth approximation is constructed for d.c.
  functions. The optimization properties of these approximations are studie
 d.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/36/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alexander Kruger (Federation University Australia)
DTSTART:20210310T060000Z
DTEND:20210310T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/37
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/37
 /">Error bounds revisited</a>\nby Alexander Kruger (Federation University 
 Australia) as part of Variational Analysis and Optimisation Webinar\n\n\nA
 bstract\nWe propose a unifying general framework of quantitative primal an
 d dual sufficient error bound conditions covering linear and nonlinear\, l
 ocal and global settings. We expose the roles of the assumptions involved 
 in the error bound assertions\, in particular\, on the underlying space: g
 eneral metric\, Banach or Asplund. Employing special collections of slope 
 operators\, we introduce a succinct form of sufficient error bound conditi
 ons\, which allows one to combine in a single statement several different 
 assertions: nonlocal and local primal space conditions in complete metric 
 spaces\, and subdifferential conditions in Banach and Asplund spaces. In t
 he nonlinear setting\, we cover both the conventional and the ‘alternati
 ve’ error bound conditions.\n\nIt is a joint work with Nguyen Duy Cuong 
 (Federation University). The talk is based on the paper:\nN. D. Cuong and 
 A. Y. Kruger\, Error bounds revisited\, arXiv: 2012.03941 (2020).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/37/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Bartl (Silesian University in Opava)
DTSTART:20210317T060000Z
DTEND:20210317T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/38
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/38
 /">Every compact convex subset of matrices is the Clarke Jacobian of some 
 Lipschitzian mapping</a>\nby David Bartl (Silesian University in Opava) as
  part of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nGive
 n a non-empty compact convex subset $P$ of $m \\times n$ matrices\, we sho
 w constructively that there exists a Lipschitzian mapping $g\\colon {\\bf 
 R}^n \\to {\\bf R}^m$ such that its Clarke Jacobian $\\partial g(0) = P$.\
 n
LOCATION:https://researchseminars.org/talk/VAWebinar/38/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jiri Outrata (Institute of Information Theory and Automation of th
 e Czech Academy of Sciences)
DTSTART:20210428T070000Z
DTEND:20210428T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/39
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/39
 /">On the solution of static contact problems with Coulomb friction via th
 e semismooth* Newton method</a>\nby Jiri Outrata (Institute of Information
  Theory and Automation of the Czech Academy of Sciences) as part of Variat
 ional Analysis and Optimisation Webinar\n\n\nAbstract\nThe lecture deals w
 ith application of a new Newton-type method to the numerical solution of d
 iscrete 3D contact problems with Coulomb friction. This method is well sui
 ted to the solution of inclusions\, and the resulting conceptual algorithm
  exhibits\, under appropriate conditions\, local superlinear convergence. Af
 ter a description of the method a new model for the considered contact pro
 blem\, amenable to the application of the new method\, will be presented. 
 The second part of the talk is then devoted to an efficient implementation
  of the general algorithm and to numerical tests. Throughout the whole lec
 ture\, various tools of modern variational analysis will be employed.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/39/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hung Phan (University of Massachusetts Lowell)
DTSTART:20210512T010000Z
DTEND:20210512T020000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/40
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/40
 /">Adaptive splitting algorithms for the sum of operators</a>\nby Hung Pha
 n (University of Massachusetts Lowell) as part of Variational Analysis and
  Optimisation Webinar\n\n\nAbstract\nA general optimization problem can of
 ten be reduced to finding a zero of a sum of multiple (maximally) monotone
  operators\, which creates challenging computational tasks as a whole. It 
 motivates the development of splitting algorithms in order to simplify the
  computations by dealing with each operator separately\, hence the name "s
 plitting". Some of the most successful splitting algorithms in application
 s are the forward-backward algorithm\, the Douglas-Rachford algorithm\, an
 d the alternating directions method of multipliers (ADMM). In this talk\, 
 we discuss some adaptive splitting algorithms for finding a zero of the su
 m of operators. The main idea is to adapt the algorithm parameters to the 
 generalized monotonicity of the operators so that the generated sequence c
 onverges to a fixed point.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/40/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Guoyin Li (The University of New South Wales)
DTSTART:20210526T070000Z
DTEND:20210526T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/41
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/41
 /">Proximal methods for nonsmooth and nonconvex fractional programs: when 
 sparse optimization meets fractional programs</a>\nby Guoyin Li (The Unive
 rsity of New South Wales) as part of Variational Analysis and Optimisation
  Webinar\n\n\nAbstract\nNonsmooth and nonconvex fractional programs are ub
 iquitous and also highly challenging. This class includes the composite optimizati
 on problems studied extensively lately\, and encompasses many important mo
 dern optimization problems arising from diverse areas such as the recently
  proposed scale invariant sparse signal reconstruction problem in signal pro
 cessing\, the robust Sharpe ratio optimization problems in finance and the
  sparse generalized eigenvalue problem in discriminant analysis. In thi
 s talk\, we will introduce extrapolated proximal methods for solving nonsm
 ooth and nonconvex fractional programs and analyse their convergence behav
 iour. Interestingly\, we will show that the proposed algorithm exhibits li
 near convergence for the sparse generalized eigenvalue problem with either car
 dinality regularization or sparsity constraints. This is achieved by ident
 ifying the explicit desingularization function of the Kurdyka-Lojasiewicz 
 inequality for the merit function of the fractional optimization models. F
 inally\, if time permits\, we will present some preliminary encouraging nu
 merical results for the proposed methods for sparse signal reconstruction 
 and sparse Fisher discriminant analysis.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/41/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Vuong Phan (University of Southampton)
DTSTART:20210609T070000Z
DTEND:20210609T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/42
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/42
 /">The Boosted Difference of Convex Functions Algorithm</a>\nby Vuong Phan
  (University of Southampton) as part of Variational Analysis and Optimisat
 ion Webinar\n\n\nAbstract\nWe introduce a new algorithm for solving Differ
 ence of Convex functions (DC) programs\, called the Boosted Difference of C
 onvex functions Algorithm (BDCA). BDCA accelerates the convergence of the 
 classical difference of convex functions algorithm (DCA) thanks to an addi
 tional line search step. We prove that any limit point of the BDCA iterati
 ve sequence is a critical point of the problem under consideration and tha
 t the corresponding objective value is monotonically decreasing and conver
 gent. The global convergence and convergence rate of the iterations are ob
 tained under the Kurdyka-Lojasiewicz property. We provide applications and
  numerical experiments for a hard problem in biochemistry and two challeng
 ing problems in machine learning\, demonstrating that BDCA outperforms DCA
 . For the biochemistry problem\, BDCA was five times faster than DCA\; for
  the Minimum Sum-of-Squares Clustering problem\, BDCA was on average sixte
 en times faster than DCA\; and for the Multidimensional Scaling problem\,
  BDCA was three times faster than DCA.\n\nJoint work with Francisco J. Arag
 on Artacho (University of Alicante\, Spain).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/42/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Scott B. Lindstrom (Curtin University)
DTSTART:20210602T070000Z
DTEND:20210602T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/43
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/43
 /">A primal/dual computable approach to improving spiraling algorithms\, b
 ased on minimizing spherical surrogates for Lyapunov functions</a>\nby Sco
 tt B. Lindstrom (Curtin University) as part of Variational Analysis and Op
 timisation Webinar\n\n\nAbstract\nOptimization problems are frequently tac
 kled by iterative application of an operator whose fixed points allow for 
 fast recovery of locally optimal solutions. Under lightweight assumptions
 \, stability is equivalent to existence of a function---called a Lyapunov 
 function---that encodes structural information about both the problem and 
 the operator. Lyapunov functions are usually hard to find\, but if a pract
 itioner had a priori knowledge---or a reasonable guess---about its structu
 re\, they could equivalently tackle the problem by seeking to minimize t
 he Lyapunov function directly. We introduce a class of methods that does t
 his. Interestingly\, for certain feasibility problems\, the circumcentered
 -reflection method (CRM) is an extant example from this class. However\, CRM may not
  lend itself well to primal/dual adaptation\, for reasons we show. Motivat
 ed by the discovery of our new class\, we experimentally demonstrate the s
 uccess of one of its other members\, implemented in a primal/dual framewor
 k.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/43/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Adil Bagirov (Federation University)
DTSTART:20210623T070000Z
DTEND:20210623T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/44
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/44
 /">Nonsmooth DC optimization: recent developments</a>\nby Adil Bagirov (Fe
 deration University) as part of Variational Analysis and Optimisation Webi
 nar\n\n\nAbstract\nIn this talk we consider unconstrained optimization pro
 blems where the objective functions are represented as a difference of two
  convex (DC) functions. Various applications of DC optimization in machine
  learning are presented. We discuss two different approaches to design met
 hods of nonsmooth DC optimization: an approach based on the extension of b
 undle methods and an approach based on the DCA (difference of convex algor
 ithm). We also discuss numerical results obtained using these methods.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/44/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bruno F. Lourenço (Institute of Statistical Mathematics)
DTSTART:20210616T070000Z
DTEND:20210616T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/45
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/45
 /">Error bounds\, amenable cones and beyond</a>\nby Bruno F. Lourenço (In
 stitute of Statistical Mathematics) as part of Variational Analysis and Op
 timisation Webinar\n\n\nAbstract\nIn this talk we present an overview of t
 he theory of amenable cones\, facial residual functions and their applic
 ations to error bounds for conic linear systems. A feature of our results 
 is that no constraint qualifications are ever assumed\, so they are applic
 able even to some problems with unfavourable theoretical properties. Time
  allowing\, we will discuss some recent findings on the geometry of amenab
 le cones and also some extensions for non-amenable cones.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/45/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrew Eberhard (RMIT University)
DTSTART:20210630T070000Z
DTEND:20210630T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/46
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/46
 /">Bridges between Discrete and Continous Optimisation in Stochastic Progr
 amming</a>\nby Andrew Eberhard (RMIT University) as part of Variational An
 alysis and Optimisation Webinar\n\n\nAbstract\nFor many years there has be
 en a divide between the theoretical under pinning\nof algorithmic analysis
  in discrete and continuous optimisation. As a case\nstudy\, stochastic op
 timisation provides a classic example. Here the\ntheoretical foundations o
 f continuous stochastic optimisation lies in the\ntheory of monotone opera
 tors\, operator splitting and nonsmooth analysis\, none\nof which appear t
 o be applicable to discrete problems. In this talks we will\ndiscuss the a
 pplication of ideas from continuous optimisation and variational\nanalysis
  to the study of progressive hedging like methods for discrete\noptimisati
 on models. The key to the success of such approaches is the\nacceptance of
  the existence of MIP and QMIP\\ solvers that can be integrated in\nto ana
 lysis as "black box solvers" that return solutions within a broader\nalgor
 ithmic analysis. Here methods more familiar to continuous optimisers and\n
 nonsmooth analysts can be used to provide proofs of convergence of both pr
 imal\nand dual methods. Unlike continuous optimisation there still exists 
 separate\nprimal and dual methods and analysis in the discrete context. We
  will discuss\nthis aspect and  some convergent modifications that yield r
 obust and effective\nversions of these methods\, long with numerical valid
 ation of their\neffectiveness.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/46/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Walaa Moursi (University of Waterloo)
DTSTART:20210915T010000Z
DTEND:20210915T020000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/47
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/47
 /">The Douglas-Rachford algorithm for solving possibly inconsistent optimi
 zation problems</a>\nby Walaa Moursi (University of Waterloo) as part of V
 ariational Analysis and Optimisation Webinar\n\n\nAbstract\nMore than 40 y
 ears ago\, Lions and Mercier introduced in a seminal paper the Douglas–R
 achford algorithm. Today\, this method is well recognized as a classical a
 nd highly successful splitting method to find minimizers of the sum of two
  (not necessarily smooth) convex functions. While the underlying theory ha
 s matured\, one case remains a mystery: the behaviour of the shadow sequen
 ce when the given functions have disjoint domains. Building on previous wo
 rk\, we establish for the first time weak and value convergence of the sha
 dow sequence generated by the Douglas–Rachford algorithm in a setting of
  unprecedented generality. The weak limit point is shown to solve the asso
 ciated normal problem which is a minimal perturbation of the original opti
 mization problem. We also present new results on the geometry of the minim
 al displacement vector.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/47/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nghia Tran (Oakland University)
DTSTART:20211027T000000Z
DTEND:20211027T010000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/48
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/48
 /">Sharp and strong minima for robust recovery</a>\nby Nghia Tran (Oakland
  University) as part of Variational Analysis and Optimisation Webinar\n\n\
 nAbstract\nIn this talk\, we show the important roles of sharp minima and 
 strong minima for robust recovery. We also obtain several characterization
 s of sharp minima for convex regularized optimization problems. Our charac
 terizations are quantitative and verifiable especially for the case of dec
 omposable norm regularized problems including sparsity\, group-sparsity\, 
 and low-rank convex problems. For group-sparsity optimization problems\, w
 e show that a unique solution is a strong solution and obtain quantitative
  characterizations for solution uniqueness.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/48/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Yost (Federation University Australia)
DTSTART:20211201T060000Z
DTEND:20211201T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/49
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/49
 /">Minimising the number of faces of a class of polytopes</a>\nby David Yo
 st (Federation University Australia) as part of Variational Analysis and O
 ptimisation Webinar\n\n\nAbstract\nPolytopes are the natural domains of ma
 ny optimisation problems. We consider a "higher order" optimisation probl
 em\, whose domain is a class of polytopes\, asking what is the minimum num
 ber of faces (of a given dimension) for this class\, and which polytopes a
 re the minimisers. Generally we consider the class of d-dimensional polyto
 pes with V vertices\, for fixed V and d. The corresponding maximisation p
 roblem was solved decades ago\, but serious progress on the minimisation q
 uestion has only been made in recent years.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/49/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Dominikus Noll (Institut de Mathématiques de Toulouse)
DTSTART:20211006T060000Z
DTEND:20211006T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/50
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/50
 /">Alternating projections with applications to Gerchberg-Saxton error red
 uction</a>\nby Dominikus Noll (Institut de Mathématiques de Toulouse) as 
 part of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nWe di
 scuss alternating projections between closed non-convex sets $A\,B$ in $R^
 n$ and obtain criteria for convergence when $A\,B$ do not intersect transv
 ersally. The infeasible case\, $A \\cap B = \\emptyset$\, is also addresse
 d\, and here we expect convergence toward a gap between $A\,B$. For sub-an
 alytic sets $A\,B$\, sub-linear convergence rates depending on the Lojasie
 wicz exponent of the distance function can be computed. We then present appl
 ications to the Gerchberg-Saxton error reduction algorithm\, to Cadzow's d
 enoising algorithm\, and to instances of the Gaussian EM-algorithm.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/50/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nadezda Sukhorukova (Swinburne University of Technology)
DTSTART:20211103T060000Z
DTEND:20211103T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/51
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/51
 /">Rational approximation and its role in different branches of mathematic
 s and applications</a>\nby Nadezda Sukhorukova (Swinburne University of Te
 chnology) as part of Variational Analysis and Optimisation Webinar\n\n\nAb
 stract\nRational approximation is a powerful function approximation tool. 
 Rational approximation is approximation by a ratio of two polynomials\, wh
 ose coefficients are subject to optimisation. Numerical methods for ration
 al approximation have been developed independently in different branches o
 f mathematics. In this talk\, I will present the interconnections between 
 different numerical methods developed for rational approximation. Most of
  them can be extended to the case of the so-called generalised rational ap
 proximation\, where the approximation is a ratio of two linear forms and t
 he basis functions are not limited to monomials. Finally\, I am going to t
 alk about real-life applications of rational and generalised rational appro
 ximation.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/51/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jane Ye (University of Victoria\, British Columbia)
DTSTART:20211110T000000Z
DTEND:20211110T010000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/52
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/52
 /">Difference of convex algorithms for bilevel programs with applications 
 in hyperparameter selection</a>\nby Jane Ye (University of Victoria\, Brit
 ish Columbia) as part of Variational Analysis and Optimisation Webinar\n\n
 \nAbstract\nA bilevel program is a sequence of two optimization problems w
 here the constraint region of the upper level problem is determined implic
 itly by the solution set of the lower level problem. In this talk\, I wil
 l present difference of convex algorithms for solving bilevel programs in 
 which the upper level objective functions are difference of convex functio
 ns and the lower level programs are fully convex. This nontrivial class of
  bilevel programs provides a powerful modelling framework for dealing with
  applications arising from hyperparameter selection in machine learning.
  Thanks to the full convexity of the lower level program\, the value funct
 ion of the lower level program turns out to be convex and hence the bileve
 l program can be reformulated as a difference of convex bilevel program. W
 e propose two algorithms for solving the reformulated difference of convex
  program and show their convergence to stationary points under very mild a
 ssumptions. Finally\, we conduct numerical experiments on a bilevel model of
  support vector machine classification.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/52/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sidney Morris (Federation University Australia)
DTSTART:20211020T060000Z
DTEND:20211020T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/53
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/53
 /">Tweaking Ramanujan's Approximation of n!</a>\nby Sidney Morris (Federat
 ion University Australia) as part of Variational Analysis and Optimisation
  Webinar\n\n\nAbstract\nIn 1730 James Stirling\, building on the work of A
 braham de Moivre\, published what is known as Stirling's approximation of 
 n!. He gave a good formula which is asymptotic to n!. Since then hundreds 
 of papers have given alternative proofs of his result and improved upon it
 \, notably by Burnside\, Gosper\, and Mortici. However\, Srinivasa Ramanuj
 an gave a remarkably better asymptotic formula. Hirschhorn and Vill
 arino gave a nice proof of Ramanujan's result and an error estimate for th
 e approximation. \n\nThis century there have been several improvements of 
 Stirling's formula\, including those by Nemes\, Windschitl\, and Chen. In this pre
 sentation it is shown \n\n(i)	how all these asymptotic results can be easi
 ly verified\; \n\n(ii)	how Hirschhorn and Villarino's argument allows a tw
 eaking of Ramanujan's result to give a better approximation\; \n\n(iii)	th
 at a new asymptotic formula can be obtained by further tweaking of Ramanuj
 an's result\;\n\n(iv)	that Chen's asymptotic formula is better than the ot
 hers mentioned here\, and the new asymptotic formula is comparable with Ch
 en's.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/53/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Maxim Dolgopolik (Institute for Problems in Mechanical Engineering
  of the Russian Academy of Sciences)
DTSTART:20210922T070000Z
DTEND:20210922T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/54
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/54
 /">DC Semidefinite Programming</a>\nby Maxim Dolgopolik (Institute for Pro
 blems in Mechanical Engineering of the Russian Academy of Sciences) as par
 t of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nDC (Diff
 erence-of-Convex) optimization has been an active area of research in nons
 mooth nonlinear optimization for over 30 years. The interest in this class
  of problems is based on the fact that one can efficiently utilize ideas a
 nd methods of convex analysis/optimization to solve DC optimization proble
 ms. The main results of DC optimization can be extended to the case of non
 linear semidefinite programming problems\, i.e. problems with matrix-value
 d constraints\, in several different ways. We will discuss two possible ge
 neralizations of the notion of DC function to the case of matrix-valued fu
 nctions and show how these generalizations lead to two different DC optimi
 zation approaches to nonlinear semidefinite programming.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/54/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rubén Campoy (University of Valencia)
DTSTART:20211013T060000Z
DTEND:20211013T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/55
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/55
 /">A product space reformulation with reduced dimension</a>\nby Rubén Cam
 poy (University of Valencia) as part of Variational Analysis and Optimisat
 ion Webinar\n\n\nAbstract\nThe product space reformulation is a powerful t
 rick when tackling monotone inclusions defined by finitely many operators 
 with splitting algorithms. This technique constructs an equivalent two-ope
 rator problem\, embedded in a product Hilbert space\, that preserves compu
 tational tractability. Each operator in the original problem requires one 
 dimension in the product space. In this talk\, we propose a new reformulat
 ion that reduces the dimension of the resulting product Hilbert spa
 ce. We shall discuss the case of not necessarily convex feasibility proble
 ms. As an application\, we obtain a new parallel variant of the Douglas-Ra
 chford algorithm with a reduction in the number of variables. The computat
 ional advantage is illustrated through some numerical experiments.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/55/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Quoc Tran-Dinh (University of North Carolina)
DTSTART:20210929T010000Z
DTEND:20210929T020000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/56
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/56
 /">Randomized Douglas-Rachford Splitting Algorithms for Federated Composit
 e Optimization</a>\nby Quoc Tran-Dinh (University of North Carolina) as pa
 rt of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nIn this
  talk\, we present two randomized Douglas-Rachford splitting algorithms to
  solve a class of composite nonconvex finite-sum optimization problems ari
 sing from federated learning. Our algorithms rely on a combination of thre
 e main techniques: Douglas-Rachford splitting scheme\, randomized block-co
 ordinate technique\, and asynchronous strategy. We show that our algorithm
 s achieve the best-known communication complexity bounds under standard as
 sumptions in the nonconvex setting\, while allowing one to inexactly update
  local models with only a subset of users each round\, and to handle nonsmoot
 h convex regularizers. Our second algorithm can be implemented in an async
 hronous mode using a general probabilistic model to capture different comp
 utational architectures. We illustrate our algorithms with many numerical 
 examples and show that the new algorithms have a promising performance com
 pared to common existing methods.\n\nThis talk is based on the collaborati
 on with Nhan Pham (UNC)\, Lam M. Nguyen (IBM)\,\nand Dzung Phan (IBM).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/56/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Fred Roosta-Khorasani (The University of Queensland)
DTSTART:20211201T000000Z
DTEND:20211201T010000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/57
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/57
 /">A Newton-MR Algorithm with Complexity Guarantee for Non-Convex Problems
 </a>\nby Fred Roosta-Khorasani (The University of Queensland) as part of V
 ariational Analysis and Optimisation Webinar\n\n\nAbstract\nClassically\, 
 the conjugate gradient (CG) method has been the dominant solver in most in
 exact Newton-type methods for unconstrained optimization. In this talk\, w
 e consider replacing CG with the minimum residual method (MINRES)\, which 
 is often used for symmetric but possibly indefinite linear systems. We sho
 w that MINRES has an inherent ability to detect negative-curvature directi
 ons. Equipped with this advantage\, we discuss algorithms\, under the gene
 ral name of Newton-MR\, which can be used for optimization of general non-
 convex objectives\, and that come with favourable complexity guarantees. W
 e also give numerical examples demonstrating the performance of these meth
 ods for large-scale non-convex machine learning problems.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/57/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Majid Abbasov (Saint-Petersburg State University)
DTSTART:20211117T060000Z
DTEND:20211117T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/58
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/58
 /">Converting exhausters and coexhausters</a>\nby Majid Abbasov (Saint-Pet
 ersburg State University) as part of Variational Analysis and Optimisation
  Webinar\n\n\nAbstract\nExhausters and coexhausters are notions of constru
 ctive nonsmooth analysis which are used to study extremal properties of fu
 nctions. An upper exhauster (coexhauster) is used to get an approximation 
 of a considered function in the neighborhood of a point in the form of $\\
 min\\max$ of linear (affine) functions. A lower exhauster (coexhauster) is
  used to represent the approximation in the form of $\\max\\min$ of linear
  (affine) functions. Conditions for a minimum in a most simple way are exp
 ressed by means of upper exhausters and coexhausters\, while conditions fo
 r a maximum are described in terms of lower exhausters and coexhausters. T
 hus the problem arises of obtaining an upper exhauster or coexhauster when
  the lower one is given\, and vice versa. In the talk I will consider this
  problem and present a new method for such a conversion. All needed auxili
 ary information will also be provided.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/58/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Janosch Rieger (Monash University)
DTSTART:20220316T060000Z
DTEND:20220316T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/59
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/59
 /">Generalized Gearhart-Koshy acceleration for the Kaczmarz method</a>\nby
  Janosch Rieger (Monash University) as part of Variational Analysis and Op
 timisation Webinar\n\n\nAbstract\nThe Kaczmarz method is an iterative nume
 rical method for solving large and sparse rectangular systems of linear eq
 uations. Gearhart and Koshy have developed an acceleration technique for t
 he Kaczmarz method for homogeneous linear systems  that minimises the dist
 ance to the desired solution in the direction of a full Kaczmarz step. Mat
 thew Tam has recently generalised this acceleration technique to inhomogen
 eous linear systems.\n\nIn this talk\, I will develop this technique into 
 an acceleration scheme that minimises the Euclidean norm error over an aff
 ine subspace spanned by a number of previous iterates and one additional c
 ycle of the Kaczmarz method. The key challenge is to find a formulation in
  which all parameters of the least-squares problem defining the unique min
 imizer are known\, and to solve this problem efficiently.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/59/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shawn Wang (The University of British Columbia)
DTSTART:20220323T000000Z
DTEND:20220323T010000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/60
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/60
 /">Roots of the identity operator and proximal mappings: (classical and ph
 antom) cycles and gap vectors</a>\nby Shawn Wang (The University of Britis
 h Columbia) as part of Variational Analysis and Optimisation Webinar\n\n\n
 Abstract\nRecently\, Simons provided a lemma for a support function of a c
 losed convex set in a general Hilbert space and used it to prove the geome
 try conjecture on cycles of projections. We extend Simons's lemma to close
 d convex functions\, show its connections to Attouch-Théra duality\, and 
 use it to characterize (classical and phantom) cycles and gap vectors of p
 roximal mappings. \n\nJoint work with H. Bauschke\n
LOCATION:https://researchseminars.org/talk/VAWebinar/60/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Pham Ky Anh (Vietnam National University)
DTSTART:20220330T060000Z
DTEND:20220330T070000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/61
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/61
 /">Regularized dynamical systems associated with structured monotone inclu
 sions</a>\nby Pham Ky Anh (Vietnam National University) as part of Variati
 onal Analysis and Optimisation Webinar\n\n\nAbstract\nIn this report\, we 
 consider two dynamical systems associated with additively structured monot
 one inclusions involving a multi-valued maximally monotone operator $\\mat
 hcal{A}$ and a single-valued operator $\\mathcal{B}$ in real Hilbert space
 s.\n\nWe establish strong convergence of the regularized forward-backward
  and regularized forward-backward-forward dynamics to an “optimal” sol
 ution of the original inclusion under a weak assumption on the sing
 le-valued operator $\\mathcal{B}$.\n\nConvergence estimates are obtained i
 f the composite operator $\\mathcal{A} + \\mathcal{B}$ is maximally monoto
 ne and strongly (pseudo)monotone. Time-discretization of the corresponding
  continuous dynamics provides an iterative regularization forward-backward
  method or an iterative regularization forward-backward-forward method wit
 h relaxation parameters. Some simple numerical examples are given to illu
 strate the agreement between analytical and numerical results as well as t
 he performance of the proposed algorithms.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/61/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sorin-Mihai Grad (ENSTA Paris)
DTSTART:20220406T070000Z
DTEND:20220406T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/62
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/62
 /">Extending the proximal point algorithm beyond convexity</a>\nby Sorin-M
 ihai Grad (ENSTA Paris) as part of Variational Analysis and Optimisation W
 ebinar\n\n\nAbstract\nIntroduced in the 1970s by Martinet for minimizi
 ng convex functions and extended shortly afterwards by Rockafellar towards
  monotone inclusion problems\, the proximal point algorithm turned out to 
 be a viable computational method for solving various classes of (structure
 d) optimization problems even beyond the convex framework. \n\nIn this tal
 k we discuss some extensions of proximal point type algorithms beyond conv
 exity. First we propose a relaxed-inertial proximal point type algorithm f
 or solving optimization problems consisting in minimizing strongly quasico
 nvex functions whose variables lie in finite-dimensional linear subspace
 s\, that can be extended to equilibrium functions involving such functions
 . \nThen we briefly discuss another generalized convexity notion for funct
 ions that we call prox-convexity\, for which the proximity operator is single-v
 alued and firmly nonexpansive\, and see that the standard proximal point a
 lgorithm and Malitsky’s Golden Ratio Algorithm (originally proposed for 
 solving convex mixed variational inequalities) remain convergent when the 
 involved functions are taken prox-convex\, too.\n\nThe talk contains joint
  work with Felipe Lara and Raúl Tintaya Marcavillaca (both from Universit
 y of Tarapacá).\n
LOCATION:https://researchseminars.org/talk/VAWebinar/62/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andreas Löhne (Friedrich Schiller University Jena)
DTSTART:20220427T070000Z
DTEND:20220427T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/63
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/63
 /">Approximating convex bodies using multiple objective optimization</a>\n
 by Andreas Löhne (Friedrich Schiller University Jena) as part of Variatio
 nal Analysis and Optimisation Webinar\n\n\nAbstract\nThe problem of comput
 ing polyhedral outer and inner approximations of a convex body can be refo
 rmulated as the problem of approximately solving a convex multiple objecti
 ve optimization problem. This extends a previous result showing that multi
 ple objective linear programming is equivalent to computing a $V$-represen
 tation of the projection of an $H$-polyhedron. These results are also disc
 ussed with respect to duality\, solution methods\, and error bounds.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/63/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Héctor Ramírez (Universidad de Chile)
DTSTART:20220413T010000Z
DTEND:20220413T020000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/64
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/64
 /">Extensions of Constant Rank Qualification Constrains condition to Nonli
 near Conic Programming</a>\nby Héctor Ramírez (Universidad de Chile) as 
 part of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nWe pr
 esent new constraint qualification conditions for nonlinear conic programm
 ing that extend some of the constant rank-type conditions from nonlinear p
 rogramming. As an application of these conditions\, we provide a unified g
 lobal convergence proof of a class of algorithms to stationary points with
 out assuming either uniqueness of the Lagrange multiplier or boundedness o
 f the Lagrange multiplier set. This class of algorithms includes\, for ins
 tance\, general forms of augmented Lagrangian\, sequential quadratic pr
 ogramming\, and interior point methods. We also compare these new conditio
 ns with some of the existing ones\, including the nondegeneracy condition\
 , Robinson's constraint qualification\, and the metric subregularity const
 raint qualification. Finally\, we propose a more general and geometric app
 roach for defining a new extension of this condition to the conic context.
  The main advantage of the latter is that we are able to recast the strong
  second-order properties of the constant rank condition in a conic context
 . In particular\, we obtain a second-order necessary optimality condition 
 that is stronger than the classical one obtained under Robinson’s constr
 aint qualification\, in the sense that it holds for every Lagrange multipl
 ier\, even though our condition is independent of Robinson’s condition.\
 n
LOCATION:https://researchseminars.org/talk/VAWebinar/64/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lars Grüne (University of Bayreuth)
DTSTART:20220504T070000Z
DTEND:20220504T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/65
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/65
 /">The turnpike property: a classical feature of optimal control problems 
 revisited</a>\nby Lars Grüne (University of Bayreuth) as part of Variatio
 nal Analysis and Optimisation Webinar\n\n\nAbstract\nThe turnpike property
  describes a particular behavior of optimal control problems that was firs
 t observed by Ramsey in the 1920s and by von Neumann in the 1930s. Since
  then it has found widespread attention in mathematical economics and con
 trol theory alike. In recent years it has received renewed interest\, on
  the one hand in optimization with partial differential equations and on
  the other hand in model predictive control (MPC)\, one of the most popul
 ar optimization-based control schemes in practice.\n\nIn this talk we wil
 l first give a general introduction to and a brief history of the turnpik
 e property\, before we look at it from a systems and control theoretic po
 int of view. In particular\, we will clarify its relation to dissipativit
 y\, detectability\, and sensitivity properties of optimal control problem
 s in both finite and infinite dimensions. In the final part of the talk w
 e will explain why the turnpike property is important for analyzing the p
 erformance of MPC.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/65/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mareike Dressler (University of New South Wales)
DTSTART:20220511T070000Z
DTEND:20220511T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/66
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/66
 /">Algebraic Perspectives on Signomial Optimization</a>\nby Mareike Dressl
 er (University of New South Wales) as part of Variational Analysis and Opt
 imisation Webinar\n\n\nAbstract\nSignomials are obtained by generalizing p
 olynomials to allow for arbitrary real exponents. This generalization offe
 rs great expressive power\, but has historically sacrificed the organizing
  principle of “degree” that is central to polynomial optimization theo
 ry. In this talk\, I introduce the concept of signomial rings that allows 
 one to reclaim that principle and explain how this leads to complete convex re
 laxation hierarchies of upper and lower bounds for signomial optimization 
 via sums of arithmetic-geometric exponentials (SAGE) nonnegativity certifi
 cates. In the first part of the talk\, I discuss the Positivstellensatz un
 derlying the lower bounds. It relies on the concept of conditional SAGE an
 d removes regularity conditions required by earlier works\, such as convex
 ity of the feasible set or Archimedeanity of its representing signomial in
 equalities. Numerical examples are provided to illustrate the performance 
 of the hierarchy on problems in chemical engineering and reaction networks
 .\n\nIn the second part\, I provide a language for and basic results in si
 gnomial moment theory that are analogous to those in the rich moment-SOS l
 iterature for polynomial optimization. That theory is used to turn (hierar
 chical) inner-approximations of signomial nonnegativity cones into (hierar
 chical) outer-approximations of the same\, which eventually yields the upp
 er bounds for signomial optimization.\n\nThis talk is based on joint work 
 with Riley Murray.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/66/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alberto De Marchi (Universität der Bundeswehr München)
DTSTART:20220525T070000Z
DTEND:20220525T080000Z
DTSTAMP:20260422T055045Z
UID:VAWebinar/67
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/VAWebinar/67
 /">Constrained Structured Optimization and Augmented Lagrangian Proximal M
 ethods</a>\nby Alberto De Marchi (Universität der Bundeswehr München) as
  part of Variational Analysis and Optimisation Webinar\n\n\nAbstract\nIn t
 his talk we discuss finite-dimensional constrained structured optimization
  problems and explore methods for their numerical solution. Featuring a co
 mposite objective function and set-membership constraints\, this problem c
 lass offers a modeling framework for a variety of applications. A general 
 and flexible algorithm is proposed that interlaces proximal methods and sa
 feguarded augmented Lagrangian schemes. We provide a theoretical character
 ization of the algorithm and its asymptotic properties\, deriving converge
 nce results for fully nonconvex problems. Adopting a proximal gradient met
 hod with an oracle as a formal tool\, it is demonstrated how the inner sub
 problems can be solved by off-the-shelf methods for composite optimization
 \, without introducing slack variables and despite the appearance of set-v
 alued projections. Illustrative examples show the versatility of constrain
 ed structured programs as a modeling tool and highlight benefits of the im
 plicit approach developed.\nA preprint paper is available at arXiv:2203.05
 276.\n
LOCATION:https://researchseminars.org/talk/VAWebinar/67/
END:VEVENT
END:VCALENDAR
