BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Alex Townsend (Cornell University)
DTSTART:20200427T200000Z
DTEND:20200427T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/1
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/1/">The ultraspherical spectral method</a>\nby Alex Townsend (Corne
 ll University) as part of CRM Applied Math Seminar\n\nLecture held in Webi
 nar.\n\nAbstract\nPseudospectral methods\, based on high degree polynomial
 s\, have spectral accuracy when solving differential equations but typical
 ly lead to dense and ill-conditioned matrices. The ultraspherical spectral
  method is a numerical technique to solve ordinary and partial differentia
 l equations\, leading to almost banded well-conditioned linear systems whi
 le maintaining spectral accuracy. In this talk\, we introduce the ultrasph
 erical spectral method and develop it into a spectral element method using
  a modification to a hierarchical Poincaré-Steklov domain decomposition m
 ethod.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Thomas Bury (McGill University)
DTSTART:20200511T200000Z
DTEND:20200511T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/2
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/2/">Detecting and distinguishing bifurcations from noisy time serie
 s data</a>\nby Thomas Bury (McGill University) as part of CRM Applied Math
  Seminar\n\nLecture held in Webinar.\n\nAbstract\nNumerous systems in the 
 natural sciences have the capacity to undergo an abrupt change in their dy
 namical behaviour as a threshold is crossed. Prominent examples include th
 e collapse of fisheries\, algal blooms and paleoclimatic transitions. Math
 ematical models reveal such transitions as the result of crossing a bifurc
 ation and help to elucidate the underlying mechanisms. However\, the numbe
 r of unknowns is often large\, making it difficult to infer where the bifu
 rcation occurs in the real system.\nIn this talk\, we will look at methods
  for detecting bifurcations using data-driven approaches. These methods ex
 ploit generic dynamical phenomena that occur prior to bifurcations\, such 
 as critical slowing down\, in order to infer their approach. We will show 
 how the power spectrum of noisy time series data provides information on t
 he type of bifurcation and validate this approach with an empirical pred
 ator-prey experiment that undergoes a Hopf bifurcation. Finally\, we will
  explore deep learning methods for detecting bifurcations and compare the
 m with more traditional statistical methods in their ability to detect bi
 furcations.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bamdad Hosseini (California Institute of Technology)
DTSTART:20200622T200000Z
DTEND:20200622T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/3
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/3/">Data-driven supervised learning: Neural networks and uncertaint
 y quantification</a>\nby Bamdad Hosseini (California Institute of Technolo
 gy) as part of CRM Applied Math Seminar\n\n\nAbstract\nIn this talk I will
  discuss some ideas at the intersection of machine learning and uncertaint
 y quantification with a particular focus on data-driven methods that do no
 t require explicit knowledge of processes that generate the data.  In the 
 first half of the talk I will discuss supervised learning on Banach spaces
  for emulation of PDE based models and outline a method that combines prin
 cipal component analysis with neural network regression for mesh-independe
 nt approximation of PDE solutions.  In the second half I will take a diffe
 rent approach to supervised learning viewing it as a conditional sampling 
 problem.  I will then introduce a measure transport framework based on gen
 erative adversarial networks (GANs) for data-driven conditional sampling.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Theodore Kolokolnikov (Dalhousie University)
DTSTART:20200629T200000Z
DTEND:20200629T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/4
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/4/">Simple agent-based models and their continuum limit</a>\nby The
 odore Kolokolnikov (Dalhousie University) as part of CRM Applied Math Semi
 nar\n\n\nAbstract\nWe discuss several very different agent-based models an
 d their continuum limits.\n\nFirst\, consider the following agent-based model of c
 oronavirus spread: people move randomly and infection occurs with some non
 zero probability when an infected individual comes within a certain "infe
 ction radius" of a susceptible individual. The question is how the infect
 ion radius affects the reproduction number. At low infection rates\, this 
 model leads to the classical S-I-R ODE model as its continuum limit. Howev
 er higher infection rates lead to a saturation effect\, which we compute e
 xplicitly using basic probability theory. Its continuum limit leads to an
  S-I-R type model with a specific saturation term. We also show that thi
 s modified model gives a much better fit to the real-world data than the c
 lassical SIR model.\n\nNext\, we will look at a very simple stochastic mod
 el of bacterial aggregation which leads to a novel fourth-order nonlinear 
 PDE in its continuum limit. This PDE admits soliton-type solutions corresp
 onding to bacterial aggregation patterns\, which we explicitly construct. 
 \n\nIf time allows\, we will consider a spatial model of wealth exchange w
 hich leads to novel integro-differential equations.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stephen Becker (University of Colorado Boulder\, USA)
DTSTART:20200921T183000Z
DTEND:20200921T193000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/5
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/5/">Algorithmic stability for generalization guarantees in machine 
 learning</a>\nby Stephen Becker (University of Colorado Boulder\, USA) as 
 part of CRM Applied Math Seminar\n\n\nAbstract\nInspired by the practical 
 success of deep learning\, the broader math community has been energized r
 ecently to find theoretical justification for these methods. There is a la
 rge amount of theory from the computer science community\, dating to the 1
 980s and earlier\, but usually the quantitative guarantees are too loose t
 o be helpful in practice\, and it is rare that theory can predict somethin
 g useful (such as at which iteration to perform early-stopping in order to pre
 vent over-fitting). \nMany of these theories are less well-known inside ap
 plied math\, so we briefly review essential results before focusing on the
  notion of algorithmic stability\, popularized in the early 2000s\, which 
 is an alternative to the more mainstream VC dimension approach\, and is on
 e avenue that might give sharper theoretical guarantees. Algorithmic stabi
 lity is appealing to applied mathematicians\, and in particular analysts\,
  since a lot of the technical work is similar to analysis used for converg
 ence proofs. \nWe give an overview of the fundamental results of algorithm
 ic stability\, focusing on the stochastic gradient descent (SGD) method in
  the context of a nonconvex loss function\, and give the latest state-of-t
 he-art bounds\, including some of our own work (joint with L. Madden and E
 . Dall'Anese) which is one of the first results that suggests when to do e
 arly-stopping.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yuji Nakatsukasa (Oxford University\, UK)
DTSTART:20200928T200000Z
DTEND:20200928T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/6
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/6/">Fast and stable randomized low-rank matrix approximation</a>\nb
 y Yuji Nakatsukasa (Oxford University\, UK) as part of CRM Applied Math Se
 minar\n\n\nAbstract\nRandomized SVD has become an extremely successful app
 roach for efficiently computing a low-rank approximation of matrices. In p
 articular the paper by Halko\, Martinsson\, and Tropp (SIREV 2011) contain
 s extensive analysis\, and has made it a very popular method. The typical 
 complexity for a rank-r approximation of m x n matrices is O(mn log n + (m
 +n)r^2) for dense matrices. The classical Nystrom method is much faster\, but o
 nly applicable to positive semidefinite matrices. This work studies a gene
 ralization of Nystrom's method applicable to general matrices\, and shows 
 that (i) it has near-optimal approximation quality comparable to competing
  methods\, (ii) the computational cost is the near-optimal O(mn log n + r^3)
  for dense matrices\, with small hidden constants\, and (iii) crucially\, i
 t can be implemented in a numerically stable fashion despite the presence 
 of an ill-conditioned pseudoinverse. Numerical experiments illustrate that
  generalized Nystrom can significantly outperform state-of-the-art methods
 \, especially when r>>1\, achieving up to a 10-fold speedup. The method is
  also well suited to updating and downdating the matrix.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Russell Luke (Universität Göttingen\, Germany)
DTSTART:20201005T200000Z
DTEND:20201005T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/7
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/7/">Optimization on Spheres: Models and Proximal Algorithms with C
 omputational Performance Comparisons</a>\nby David Russell Luke (Universit
 ät Göttingen\, Germany) as part of CRM Applied Math Seminar\n\n\nAbstrac
 t\nWe present a unified treatment of the abstract problem of finding the b
 est approximation between a cone and spheres in the image of affine transf
 ormations. Prominent instances of this problem are phase retrieval and sou
 rce localization. The common geometry binding these problems permits a gen
 eric application of algorithmic ideas and abstract convergence results for
  nonconvex optimization. We organize variational models for this problem i
 nto three different classes and derive the main algorithmic approaches wit
 hin these classes (13 in all). We identify the central ideas underlying th
 ese methods and provide thorough numerical benchmarks comparing their perf
 ormance on synthetic and laboratory data. The software and data of our exp
 eriments are all publicly accessible. We also introduce one new algorithm\
 , a cyclic relaxed Douglas-Rachford algorithm\, which outperforms all othe
 r algorithms by every measure: speed\, stability and accuracy. The analysi
 s of this algorithm remains open.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sheehan Olver (Imperial College London\, UK)
DTSTART:20201019T200000Z
DTEND:20201019T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/8
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/8/">Sparse Spectral Methods for Power-Law Interactions</a>\nby Shee
 han Olver (Imperial College London\, UK) as part of CRM Applied Math Semin
 ar\n\n\nAbstract\nAttractive-repulsive power law equilibria are an import
 ant tool in modelling phenomena in collective behaviour: picture a flock o
 f birds which simultaneously group together\, but not too closely (i.e.\, 
 they practice social distancing)\, until an equilibrium distribution is re
 ached. In this talk we show that orthogonal polynomials have sparse recurr
 ence relationships for power law (Riesz) kernels. This leads to highly str
 uctured and efficiently solvable linear systems for the attractive-repulsi
 ve case with two such kernels of opposite sign\, giving an effective numer
 ical method for computing such equilibrium distributions. This links to an
 d builds on related work in logarithmic potential theory\, singular integr
 al equations\, and fractional differential equations.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Johannes Royset (Naval Postgraduate School\, USA)
DTSTART:20201026T200000Z
DTEND:20201026T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/9
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/9/">Variational Perspectives on Mathematical Optimization</a>\nby J
 ohannes Royset (Naval Postgraduate School\, USA) as part of CRM Applied Ma
 th Seminar\n\n\nAbstract\nThe mathematical tools for building optimization
  models and algorithms grow out of linear algebra\, differential calculus 
 and real analysis. However\, the needs of applications have led to a new a
 rea of mathematics that can handle systems of inequalities and functions t
 hat are neither smooth nor well-defined in a traditional sense. Variationa
 l analysis is the broad term for this area of mathematics. In this present
 ation\, we show its crucial role in the development of optimization models
  and algorithms in finite dimensions. First\, we examine variational geome
 try and definitions of normal and tangent vectors that extend the classica
 l notions for smooth manifolds. This in turn leads to subdifferentiability
 \, a wide range of calculus rules and optimality conditions for arbitrary 
 functions. Second\, we develop an approximation theory for optimization pr
 oblems that leads to consistent approximations\, error bounds and rates of
  convergence even in the nonconvex and nonsmooth setting.\n\nDr. Johannes 
 O. Royset is Professor of Operations Research at the Naval Postgraduate Sc
 hool. Dr. Royset's research focuses on formulating and solving stochastic 
 and deterministic optimization problems arising in data analytics\, sensor
  management\, and reliability engineering. He was awarded a National Resea
 rch Council postdoctoral fellowship in 2003\, a Young Investigator Award f
 rom the Air Force Office of Scientific Research in 2007\, and the Barchi P
 rize as well as the MOR Journal Award from the Military Operations Researc
 h Society in 2009. He received the Carl E. and Jessie W. Menneken Faculty 
 Award for Excellence in Scientific Research in 2010 and the Goodeve Medal 
 from the Operational Research Society in 2019. Dr. Royset was a plenary sp
 eaker at the International Conference on Stochastic Programming in 2016 an
 d at the SIAM Conference on Uncertainty Quantification in 2018. He has a D
 octor of Philosophy degree from the University of California at Berkeley (
 2002). Dr. Royset has been an associate or guest editor of Operations Rese
 arch\, Mathematical Programming\, Journal of Optimization Theory and Appli
 cations\, Journal of Convex Analysis\, Set-Valued and Variational Analysis
 \, Naval Research Logistics\, and Computational Optimization and Applicati
 ons.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Heather Harrington (Oxford University\, UK)
DTSTART:20201116T210000Z
DTEND:20201116T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/10
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/10/">Algebraic Systems Biology</a>\nby Heather Harrington (Oxford U
 niversity\, UK) as part of CRM Applied Math Seminar\n\n\nAbstract\nSignall
 ing pathways in molecular biology can be modelled by polynomial dynamical 
 systems. I will present models describing two biological systems involved 
 in development and cancer. I will overview approaches to analyse these mod
 els with data using computational algebraic geometry\, differential algebr
 a and statistics. Finally\, I will present how topological data analysis c
 an provide additional information to distinguish wild-type and mutant mole
 cules in one pathway. These case studies showcase how computational geomet
 ry\, topology and dynamics can provide new insights into the biological syst
 ems\, specifically how changes at the molecular scale (e.g. molecular muta
 tions) result in kinetic differences that are observed as phenotypic chang
 es (e.g. mutations in fruit fly wings).\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Thomas Surowiec (Philipps-Universität Marburg\, Germany)
DTSTART:20201123T210000Z
DTEND:20201123T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/11
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/11/">A Primal-Dual Algorithm for Risk Minimization in PDE-Constrain
 ed Optimization</a>\nby Thomas Surowiec (Philipps-Universität Marburg\, G
 ermany) as part of CRM Applied Math Seminar\n\n\nAbstract\nWe present an a
 lgorithm for the solution of risk-averse optimization problems. The settin
 g is sufficiently general so as to encompass both finite-dimensional and P
 DE-constrained stochastic optimization problems. Due to a lack of smoothne
 ss of many popular risk measures and non-convexity of the objective functi
 ons\, both the numerical approximation and numerical solution pose a major c
 omputational challenge. The proposed algorithm addresses these issues in p
 art by making use of the favorable dual properties of coherent risk measur
 es. The algorithm itself is motivated by the classical method of multiplie
 rs and exploits recent results on epigraphical regularization of risk meas
 ures. Consequently\, the algorithm requires the solution of a sequence of 
 smooth problems using derivative-based methods. We prove convergence of th
 e algorithm in the fully continuous setting and conclude with several nume
 rical examples. The algorithm is seen to outperform a popular bundle-trust
  method and a direct smoothing-plus-continuation approach.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Damek Davis (Cornell University\, USA)
DTSTART:20201130T210000Z
DTEND:20201130T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/12
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/12/">Nonconvex Optimization for Estimation and Learning: Dynamics\,
  Conditioning\, and Nonsmoothness</a>\nby Damek Davis (Cornell University\
 , USA) as part of CRM Applied Math Seminar\n\n\nAbstract\nNonconvex optimi
 zation algorithms play a major role in solving statistical estimation and 
 learning problems. Indeed\, simple nonconvex heuristics\, such as the stoc
 hastic gradient method\, often provide satisfactory solutions in practice\
 , despite such problems being NP hard in the worst case. Key examples incl
 ude deep neural network training and signal estimation from nonlinear meas
 urements. While practical success stories are common\, strong theoretical 
 guarantees are rarer. The purpose of this talk is to overview a few (hig
 hly non-exhaustive!) settings where rigorous performance guarantees can be
  established for nonconvex optimization\, focusing on the interplay of alg
 orithm dynamics\, problem conditioning\, and nonsmoothness.\n\nBio: Damek 
 Davis received his Ph.D. in mathematics from the University of California\
 , Los Angeles in 2015. In July 2016 he joined Cornell University's School 
 of Operations Research and Information Engineering as an Assistant Profess
 or. Damek is broadly interested in the mathematics of data science\, parti
 cularly the interplay of optimization\, signal processing\, statistics\, a
 nd machine learning. He is the recipient of several awards\, including the
  INFORMS Optimization Society Young Researchers Prize (2019) and a Sloa
 n Research Fellowship in Mathematics (2020).\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tim Hoheisel (McGill University)
DTSTART:20210111T210000Z
DTEND:20210111T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/13
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/13/">Halting Time is Predictable for Large Models: A Universality P
 roperty and Average-case Analysis</a>\nby Tim Hoheisel (McGill University)
  as part of CRM Applied Math Seminar\n\n\nAbstract\nAverage-case analysis 
 computes the complexity of an algorithm averaged over all possible inputs.
  Compared to worst-case analysis\, it is more representative of the typica
 l behavior of an algorithm\, but remains largely unexplored in optimization
 . One difficulty is that the analysis can depend on the probability distri
 bution of the inputs to the model. However\, we show that this is not the 
 case for a class of large-scale problems trained with first-order methods 
 including random least squares and one-hidden layer neural networks with r
 andom weights.  In fact\, the halting time exhibits a universality propert
 y: it is independent of the probability distribution. With this barrier fo
 r average-case analysis removed\, we provide the first explicit average-ca
 se convergence rates showing a tighter complexity not captured by traditio
 nal worst-case analysis. Finally\, numerical simulations suggest this univ
 ersality property holds for a more general class of algorithms and problem
 s.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael P. Friedlander (University of British Columbia)
DTSTART:20210118T210000Z
DTEND:20210118T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/14
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/14/">Polar deconvolution of mixed signals</a>\nby Michael P. Friedl
 ander (University of British Columbia) as part of CRM Applied Math Seminar
 \n\n\nAbstract\nThe signal demixing problem seeks to separate the superpos
 ition of multiple signals into its constituent components.  We model the s
 uperposition process as the polar convolution of atomic sets\, which allow
 s us to use the duality of convex cones to develop an efficient two-stage 
 algorithm with sublinear iteration complexity and linear storage.  If the 
 signal measurements are random\, the polar deconvolution approach stably r
 ecovers low-complexity and mutually-incoherent signals with high probabili
 ty and with optimal sample complexity.  Numerical experiments on both real
  and synthetic data confirm the theory and efficiency of the proposed appr
 oach.  Joint work with Zhenan Fan\, Halyun Jeong\, and Babhru Joshi at the
  University of British Columbia.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nathan Kutz (University of Washington)
DTSTART:20210125T210000Z
DTEND:20210125T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/15
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/15/">Targeted use of deep learning for physics and engineering</a>\
 nby Nathan Kutz (University of Washington) as part of CRM Applied Math Sem
 inar\n\n\nAbstract\nMachine learning and artificial intelligence algorithm
 s are now being used to automate the discovery of governing physical equat
 ions and coordinate systems from measurement data alone.  However\, positi
 ng a universal physical law from data is challenging: (i) An appropriate c
 oordinate system must also be advocated\, and (ii) an accompanying discrep
 ancy model must simultaneously be proposed to account for the inevitable m
 ismatch between theory and measurements.  Using a combination of
  deep learning and sparse regression\, specifically the sparse identificat
 ion of nonlinear dynamics (SINDy) algorithm\, we show how a robust mathema
 tical infrastructure can be formulated for simultaneously learning physics
  models and their coordinate systems.  This can be done with limited data 
 and sensors.  We demonstrate the methods on a diverse set of examples\, s
 howing how data can maximally be exploited for scientific and engineerin
 g applications.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Zhaojun Bai (UC Davis)
DTSTART:20210201T210000Z
DTEND:20210201T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/16
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/16/">Rayleigh quotient optimizations and eigenvalue problems</a>\nb
 y Zhaojun Bai (UC Davis) as part of CRM Applied Math Seminar\n\n\nAbstract
 \nMany computational science and data analysis techniques lead to optimizi
 ng Rayleigh quotient (RQ) and RQ-type objective functions\, such as comput
 ing excitation states (energies) of electronic structures\, robust classif
 ication to handle uncertainty and constrained data clustering to incorpora
 te domain knowledge.  We will discuss emerging RQ optimization problems\, 
 variational principles\, and reformulations to algebraic linear and nonlin
 ear eigenvalue problems.  We will show how to exploit underlying propertie
 s of these eigenvalue problems for designing fast solvers\, and illustrate
  the efficacy of these solvers in applications.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marwa El Halabi (MILA)
DTSTART:20210208T210000Z
DTEND:20210208T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/17
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/17/">Optimal approximation for unconstrained non-submodular minimiz
 ation</a>\nby Marwa El Halabi (MILA) as part of CRM Applied Math Seminar\n
 \n\nAbstract\nSubmodular function minimization is well studied\, and exist
 ing algorithms solve it exactly or up to arbitrary accuracy.  However\, in
  many applications\, such as structured sparse learning or batch Bayesian 
 optimization\, the objective function is not exactly submodular\, but clos
 e.  In this case\, no theoretical guarantees exist.  Indeed\, submodular m
 inimization algorithms rely on intricate connections between submodularity
  and convexity.  We show how these relations can be extended to obtain app
 roximation guarantees for minimizing non-submodular functions\, characteri
 zed by how close the function is to submodular.  We also extend this resul
 t to noisy function evaluations.  Our approximation results are the first 
 for minimizing non-submodular functions\, and are optimal\, as established
  by our matching lower bound.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Patrick Combettes (NC State)
DTSTART:20210215T210000Z
DTEND:20210215T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/18
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/18/">Perspective Functions and Applications</a>\nby Patrick Combett
 es (NC State) as part of CRM Applied Math Seminar\n\n\nAbstract\nIn this t
 alk I will discuss mathematical and computational issues pertaining to per
 spective functions\, a powerful concept that permits extending a convex fu
 nction to a jointly convex one in terms of an additional scale variable. A
 pplications in inverse problems and statistics will be presented.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Heinz Bauschke (UBC)
DTSTART:20210222T210000Z
DTEND:20210222T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/19
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/19/">Compositions of projection mappings: fixed point sets and diff
 erence vectors</a>\nby Heinz Bauschke (UBC) as part of CRM Applied Math Se
 minar\n\n\nAbstract\nProjection operators and associated projection algori
 thms are fundamental building blocks in fixed point theory and optimizatio
 n.  In this talk\, I will survey recent results on the displacement mappin
 g of the right-shift operator and sketch a new application deepening our u
 nderstanding of the geometry of the fixed point set of the composition of 
 projection operators in Hilbert space.  Based on joint works with Salha Al
 wadani\, Julian Revalski\, and Shawn Wang.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Paul E. Hand (Northeastern University)
DTSTART:20210308T210000Z
DTEND:20210308T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/20
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/20/">Signal Recovery with Generative Priors</a>\nby Paul E. Hand (N
 ortheastern University) as part of CRM Applied Math Seminar\n\n\nAbstract\
 nRecovering images from very few measurements is an important task in imag
 ing problems.  Doing so requires assuming a model of what makes some image
 s natural.  Such a model is called an image prior.  Classical priors such 
 as sparsity have led to the speedup of Magnetic Resonance Imaging in certa
 in cases.  With the recent developments in machine learning\, neural netwo
 rks have been shown to provide efficient and effective priors for inverse 
 problems arising in imaging.  In this talk\, we will discuss the use of ne
 ural network generative models for inverse problems in imaging.  We will p
 resent a rigorous recovery guarantee at optimal sample complexity for comp
 ressed sensing and other inverse problems under a suitable random model.  
 We will see that generative models enable an efficient algorithm for phase
  retrieval from generic measurements with optimal sample complexity.  In c
 ontrast\, no efficient algorithm is known for this problem in the case of 
 sparsity priors.  We will discuss strengths\, weaknesses\, and future oppo
 rtunities of neural networks and generative models as image priors.  These
  works are in collaboration with Vladislav Voroninski\, Reinhard Heckel\, 
 Ali Ahmed\, Wen Huang\, Oscar Leong\, Jorio Cocola\, Muhammad Asim\, and M
 ax Daniels.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Clarice Poon (University of Bath)
DTSTART:20210315T200000Z
DTEND:20210315T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/21
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/21/">Off-the-grid sparse estimation</a>\nby Clarice Poon (Universit
 y of Bath) as part of CRM Applied Math Seminar\n\n\nAbstract\nThe behaviou
 r of sparse regularization using the Lasso method is well understood when 
 dealing with discretized linear models.  However\, the behaviour of Lasso
  is poor when dealing with models with very large parameter spaces and in 
 recent years\, there has been much interest in the use of "off-the-grid" 
 approaches\, using a continuous parameter space in conjunction with convex
  optimization problems over measures.  In my talk\, I will present some re
 cent results which explain the behaviour of this method in arbitrary dimen
 sions.  Some highlights include the use of the Fisher metric to study the
  performance of Blasso over general domains and the application of this fo
 r quantitative MRI.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Olga Mula (Paris Dauphine)
DTSTART:20210322T200000Z
DTEND:20210322T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/22
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/22/">Depth-Adaptive Neural Networks from the Optimal Control viewpo
 int</a>\nby Olga Mula (Paris Dauphine) as part of CRM Applied Math Seminar
 \n\n\nAbstract\nIn recent years\, deep learning has been connected with op
 timal control as a way to define a notion of a continuous underlying learn
 ing problem.  In this view\, neural networks can be interpreted as a disc
 retization of a parametric Ordinary Differential Equation which\, in the l
 imit\, defines a continuous-depth neural network.  The learning task th
 en consists in finding the best ODE parameters for the problem under consi
 deration\, and their number increases with the accuracy of the time discr
 etization.  Although important steps have been taken to realize the advant
 ages of such continuous formulations\, most current learning techniques f
 ix a discretization (i.e. the number of layers is fixed).  In this work\, 
 we propose an iterative adaptive algorithm where we progressively refine 
 the time discretization (i.e. we increase the number of layers).  Provided
  that certain tolerances are met across the iterations\, we prove that th
 e strategy converges to the underlying continuous problem.  One salient ad
 vantage of such a shallow-to-deep approach is that it makes the superior a
 pproximation properties of deep networks accessible in practice by mitiga
 ting over-parametrization issues.  The performance of the approach is i
 llustrated in several numerical examples.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sasha Aravkin (University of Washington)
DTSTART:20210412T200000Z
DTEND:20210412T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/23
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/23/">A tale of two models for Covid-19 scenarios</a>\nby Sasha Arav
 kin (University of Washington) as part of CRM Applied Math Seminar\n\n\nAb
 stract\nThe Covid-19 pandemic is a defining global health event of the 21st ce
 ntury.  Forecasting the evolution of the pandemic is a key problem for any
 one trying to plan ahead.  Since March 2020\, IHME has been generating Cov
 id-19 scenarios\, first for US states and then for all Admin-1 locations a
 round the world.  These scenarios have been intensively used\; results are
  uploaded weekly to an interactive website: https://covid19.healthdata.org
 / \nIn this talk\, we describe two core mathematical models underlying the
  IHME scenarios.  The first model\, dubbed CurveFit\, used strong assumpti
 ons to get useful predictions using extremely limited data\, and was used 
 during March and April of 2020.  The second model\, a data-driven SEIIR mo
 del\, was put in play in June 2020\, and provides a flexible way to incorp
 orate relationships with key drivers such as mobility\, mask use\, and pne
 umonia seasonality.  We describe the mathematics underlying both models\, 
 and discuss the interplay between stability\, scalability\, and complexity
  in mathematical modeling.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:test
DTSTART:20210614T200000Z
DTEND:20210614T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/24
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/24/">test</a>\nby test as part of CRM Applied Math Seminar\n\n\nAbs
 tract\nWe show that intertwining operators for the discrete Fourier transf
 orm form a cubic algebra $C_q$ with $q$ a root of unity. This algebra is i
 ntimately related to the two other well-known\nrealizations of the cubic a
 lgebra: the Askey-Wilson algebra and the Askey-Wilson-Heun algebra.\nThis 
 is joint work with Mesuma Atakishiyeva (Universidad Autónoma del Estado d
 e Morelos\,\nCentro de Investigación en Ciencias\, Cuernavaca\, 62250\, M
 orelos\, México) and Alexei Zhedanov (School of Mathematics\, Renmin Univ
 ersity of China\, Beijing 100872\, China).\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniel Kuhn (EPFL)
DTSTART:20210913T183000Z
DTEND:20210913T193000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/26
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/26/">Mathematical Foundations of Robust and Distributionally Robust
  Optimization</a>\nby Daniel Kuhn (EPFL) as part of CRM Applied Math Semin
 ar\n\n\nAbstract\nRobust and distributionally robust optimization are mode
 ling paradigms for decision-making under uncertainty where the uncertain p
 arameters are only known to reside in an uncertainty set or are governed b
 y any probability distribution from within an ambiguity set\, respectively
 \, and a decision is sought that minimizes a cost function under the most 
 adverse outcome of the uncertainty.  In this paper\, we develop a rigorous
  and general theory of robust and distributionally robust nonlinear optimi
 zation using the language of convex analysis.  Our framework is based on a
  generalized `primal-worst-equals-dual-best' principle that establishes st
 rong duality between a semi-infinite primal worst and a non-convex dual be
 st formulation\, both of which admit finite convex reformulations.  This p
 rinciple offers an alternative formulation for robust optimization problem
 s that may be computationally advantageous\, and it obviates the need to m
 obilize the machinery of abstract semi-infinite duality theory to prove st
 rong duality in distributionally robust optimization.  We illustrate the m
 odeling power of our approach through convex reformulations for distributi
 onally robust optimization problems whose ambiguity sets are defined throu
 gh general optimal transport distances\, which generalize earlier results 
 for Wasserstein ambiguity sets.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Diane Guignard (University of Ottawa)
DTSTART:20210920T200000Z
DTEND:20210920T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/27
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/27/">Nonlinear reduced models for parametric PDEs</a>\nby Diane Gui
 gnard (University of Ottawa) as part of CRM Applied Math Seminar\n\n\nAbst
 ract\nWe consider model reduction methods for parametric partial different
 ial equations.  The usual approach to model reduction is to construct a lo
 w dimensional linear space which accurately approximates the parameter-to-
 solution map\, and use it to build an efficient forward solver.  However\,
  the construction of a suitable linear space is not always feasible numeri
 cally.  It is well-known that nonlinear methods may provide improved effic
 iency.  In a so-called library approximation\, the idea is to replace the 
 linear space by a collection of linear (or affine) spaces of smaller dimen
 sion.  In this talk\, we first review standard linear methods for model re
 duction.  Then\, we present a strategy which can be used to generate a non
 linear reduced model\, namely a library based on piecewise (Taylor) polyno
 mials.  We provide an analysis of the method\, in particular the derivatio
 n of an upper bound on the size of the library\, and illustrate its perfor
 mance through several numerical experiments.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gitta Kutyniok (LMU Munich)
DTSTART:20210927T183000Z
DTEND:20210927T193000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/28
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/28/">The Modern Mathematics of Deep Learning</a>\nby Gitta Kutyniok
  (LMU Munich) as part of CRM Applied Math Seminar\n\n\nAbstract\nDespite t
 he outstanding success of deep neural networks in real-world applications\
 , ranging from science to public life\, most of the related research is em
 pirically driven and a comprehensive mathematical foundation is still miss
 ing.  At the same time\, these methods have already shown their impressive
  potential in mathematical research areas such as imaging sciences\, inver
 se problems\, or numerical analysis of partial differential equations\, so
 metimes by far outperforming classical mathematical approaches for particu
 lar problem classes.  The goal of this lecture is to first provide an intr
 oduction into this new vibrant research area.  We will then survey recent 
 advances in two directions\, namely the development of a mathematical foun
 dation of deep learning and the introduction of novel deep learning-based 
 approaches to mathematical problem settings.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/28/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jason Bramburger (George Mason University)
DTSTART:20211004T200000Z
DTEND:20211004T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/29
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/29/">Deep learning of conjugate mappings</a>\nby Jason Bramburger (
 George Mason University) as part of CRM Applied Math Seminar\n\n\nAbstract
 \nDespite many of the most common chaotic dynamical systems being continuo
 us in time\, it is through discrete time mappings that much of the underst
 anding of chaos is formed. Henri Poincaré first made this connection by t
 racking consecutive iterations of the continuous flow with a lower-dimensi
 onal\, transverse subspace. The mapping that iterates the dynamics through
  consecutive intersections of the flow with the subspace is now referred t
 o as a Poincaré map\, and it is the primary method available for interpre
 ting and classifying chaotic dynamics. Unfortunately\, in all but the simp
 lest systems\, an explicit form for such a mapping remains outstanding. In
  this talk I present a method of discovering explicit Poincaré mappings u
 sing deep learning to construct an invertible coordinate transformation in
 to a conjugate representation where the dynamics are governed by a relativ
 ely simple chaotic mapping. The invertible change of variable is based on 
 an autoencoder\, which allows for dimensionality reduction\, and has the a
 dvantage of classifying chaotic systems using the equivalence relation of 
 topological conjugacies. We illustrate with low-dimensional systems such a
 s the Rössler and Lorenz systems\, while also demonstrating the utility o
 f the method on the infinite-dimensional Kuramoto--Sivashinsky equation.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/29/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Terry Rockafellar (University of Washington)
DTSTART:20211018T200000Z
DTEND:20211018T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/30
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/30/">Hidden convexity in nonconvex optimization</a>\nby Terry Rocka
 fellar (University of Washington) as part of CRM Applied Math Seminar\n\n\
 nAbstract\nIn nonconvex optimization\, not only the objective but even the
  feasible set may lack convexity.  It may seem therefore that the concepts
  and methodology of convex optimization can no longer have a fundamental r
 ole\, but this is actually wrong.  Standard sufficient conditions for loca
 l optimality in nonlinear programming and its extensions turn out to corre
 spond to characterizing optimality in terms of a local convex-concave-type
  saddle point of an augmented Lagrangian function.  Algorithms that operat
 e effectively in both primal and dual elements are thereby revealed as wo
 rking just as they would in the convex case.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/30/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aaron Berk (University of British Columbia)
DTSTART:20211025T200000Z
DTEND:20211025T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/31
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/31/">On LASSO parameter sensitivity</a>\nby Aaron Berk (University 
 of British Columbia) as part of CRM Applied Math Seminar\n\n\nAbstract\nCo
 mpressed sensing theory explains why LASSO programs recover structured hig
 h-dimensional signals with minimax order-optimal error.  Yet\, the optimal
  choice of the program's governing parameter is often unknown in practice.
   It is still unclear how variation of the governing parameter impacts rec
 overy error in compressed sensing\, which is otherwise provably stable and
  robust.  We provide an overview of parameter sensitivity in LASSO program
 s in the setting of proximal denoising\; and of compressed sensing with su
 bgaussian measurement matrices and Gaussian noise.  We demonstrate how two
  popular ell-1 minimization programs exhibit sensitivity with respect to t
 heir parameter choice and illustrate the theory with numerical simulations
 .  For example\, a 1% error in the estimate of a parameter can cause the e
 rror to increase by a factor of 10^9\, while choosing a different LASSO pr
 ogram avoids such sensitivity issues.  We hope that revealing parameter se
 nsitivity regimes of LASSO programs helps to inform a practitioner's choic
 e.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/31/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Frank E. Curtis (Lehigh University)
DTSTART:20211101T200000Z
DTEND:20211101T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/32
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/32/">Algorithms for Deterministically Constrained Stochastic Opt
 imization</a>\nby Frank E. Curtis (Lehigh University) as part of CRM Appli
 ed Math Seminar\n\n\nAbstract\nI will present the recent work by my resear
 ch group on the design\, analysis\, and implementation of algorithms for s
 olving nonlinear optimization problems that involve a stochastic objecti
 ve function and deterministic constraints.  The talk will focus on our s
 equential quadratic optimization (commonly known as SQP) methods for cases
  when the constraints are defined by nonlinear systems of equations\, wh
 ich arise in various applications including optimal control\, PDE-constrai
 ned optimization\, and network optimization problems.  One might also co
 nsider our techniques for training machine learning (e.g.\, deep learning)
  models with constraints.  I will also discuss the various extensions th
 at my group is exploring along with other related open questions.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/32/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rainer Groh (Bristol Composites Institute (ACCIS))
DTSTART:20211108T210000Z
DTEND:20211108T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/33
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/33/">Experimental continuation of nonlinear load-bearing structures
 </a>\nby Rainer Groh (Bristol Composites Institute (ACCIS)) as part of CRM
  Applied Math Seminar\n\n\nAbstract\nThe drive for lightweighting in struc
 tural engineering leads to ever thinner structures that deform in nonlinea
 r ways and that are prone to sudden instabilities.  Simultaneously\, a ren
 ewed interest in structural instability revolves around purposefully embed
 ding instabilities in structures to add functionality beyond structural lo
 ad-carrying capability (e.g.  dynamic shape adaptivity).  To date\, the de
 sign of nonlinear structures is guided almost entirely by computational mo
 delling\, in particular the use of numerical continuation tools.  Advances
  in experimental testing of nonlinear structures\, on the other hand\, are
  significantly lagging behind numerical methods.  While numerical continua
 tion principles such as path-following\, calculation of bifurcations\, bra
 nch-switching\, and bifurcation tracking are now well established\, nonlin
 ear experimental methods of structures have not advanced beyond simple dis
 placement and force control.  This means that the nonlinear response of ev
 en simple nonlinear structures cannot be fully characterised\, as establis
 hed techniques induce dynamic snaps at limit points and subcritical bifurc
 ations.  There is thus huge potential for devising novel and non-destructi
 ve ways of testing nonlinear structures by applying concepts from the fiel
 d of continuation to experimental mechanics.  At the University of Bristol
 \, we have developed a testing method based on adding control points with 
 auxiliary sensors and actuators to a structure to: (i) stabilise otherwise
  unstable equilibria\; (ii) control the shape of the structure to transiti
 on between different stable equilibria\; and (iii) compute an experimental
  “tangential” stiffness matrix (the Jacobian)\, which ultimate
 ly allows Newton's root-finding algorithm to be implemented experimentally
 .  With this approach all the features of the numerical techniques mention
 ed above can (theoretically) be replicated.  The testing method has been a
 pplied to laboratory scale test specimens such as the snap-through of a sh
 allow arch\, and this seminar will provide an overview of the mathematical
  background to experimental continuation\, its application\, and outlook t
 o future experiments.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/33/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wolfgang Dahmen (University of South Carolina)
DTSTART:20211115T210000Z
DTEND:20211115T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/34
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/34/">Some Thoughts on Physics Informed Neural Networks</a>\nby Wolf
 gang Dahmen (University of South Carolina) as part of CRM Applied Math Sem
 inar\n\n\nAbstract\nEmploying Deep Learning concepts to "learn" physical l
 aws\, has been recently attracting significant attention. In particular\, 
 so called "Physics Informed Neural Networks” (PINN) refers to a paradigm
  where the training of model surrogates is based on empirical risks that r
 equire only point-wise evaluation of residuals. This avoids the expensive 
 computation of a sufficiently large number of training data\, typically gi
 ven in terms of high fidelity approximations of model states. \n\nThe core
  issue addressed in this talk is the prediction capability of such methods
  for models given in terms of parameter-dependent families of partial diff
 erential equations. Related specific questions concern\, for instance\, th
 e choice of "variationally correct" training risks that convey certifiable
  information about the achieved accuracy in problem-relevant metrics\, t
 he role of a priori versus a posteriori error bounds\, connections with Ge
 nerative Adversarial Networks\, as well as related implications on trainin
 g strategies and network adaptation.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/34/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Soizic Terrien (CNRS - Université du Mans)
DTSTART:20211122T210000Z
DTEND:20211122T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/35
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/35/">Equidistant and non equidistant pulsing patterns in an excitab
 le microlaser with delayed feedback</a>\nby Soizic Terrien (CNRS - Univers
 ité du Mans) as part of CRM Applied Math Seminar\n\n\nAbstract\nExcitabil
 ity is observed in many natural and artificial systems\, from spiking neur
 ons to cardiac cells and semiconductor lasers.  It corresponds to the all-
 or-none pulse-shaped response of a system to an external perturbation\, d
 epending on whether or not the perturbation amplitude exceeds the so-call
 ed excitable threshold.  When subject to delayed feedback\, an excitable system
  can regenerate its own excitable response when it is reinjected after a d
 elay time τ.  As the process repeats\, this results in sustained pulsing 
 regimes\, which can be of interest for many applications\, from data trans
 mission to all-optical signal processing or neuromorphic photonic networks
 .  \n\nHere we investigate the short-term and long-term dynamics of an exc
 itable microlaser subject to delayed optical feedback.  This is done both 
 experimentally and numerically through a bifurcation analysis of a suitabl
 e model written in the form of three delay-differential equations (DDEs) w
 ith one fast and two slow variables.  We show that almost any pulse sequen
 ce can be excited and regenerated by the system over short periods of time
 .  In the long-term\, on the other hand\, the system settles down to one o
 f the coexisting\, slowly-attracting periodic orbits\, which correspond to
  different numbers of pulses in the feedback cavity.  We show that\, depen
 ding on the internal timescales of the excitable system\, these pulses app
 ear to be either equidistant (i.e.  with equalized pulse intervals) or non
 -equidistant in the feedback cavity.  A bifurcation analysis demonstrates 
 that non-equidistant pulsing patterns originate in resonance phenomena.  T
 he mechanism for the emergence of very large locking regions in the parame
 ter space is investigated.  \n\nJoint work with Bernd Krauskopf (Universit
 y of Auckland)\, Neil Broderick (University of Auckland) and Sylvain Barba
 y (C2N\, CNRS / Univ.  Paris Saclay)\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/35/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrus Giraldo (The University of Auckland)
DTSTART:20211129T210000Z
DTEND:20211129T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/36
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/36/">Degenerate singular cycles and chaotic switching in the two-si
 te open Bose--Hubbard model</a>\nby Andrus Giraldo (The University of Auck
 land) as part of CRM Applied Math Seminar\n\n\nAbstract\nThe two-site open
  Bose--Hubbard dimer model is a celebrated fundamental quantum optical mod
 el that accounts for the dynamics of bosons at two lossy interacting sites
 . Recently\, two coupled\, driven\, and lossy photonic crystal nanocavitie
 s ---which are optical devices that operate with only a few hundred photon
 s due to their extremely small size--- have been shown to realise this mod
 el experimentally. Thus\, there is much interest in understanding the diff
 erent behaviours that such model exhibits for theoretical and practical pu
 rposes.\nThis talk will show the different dynamics in the semiclassical a
 pproximation of this quantum optical system by presenting a comprehensive 
 bifurcation analysis. We characterise different transitions of chaotic at
 tractors in the parameter plane by numerically computing tangency bifurca
 tions between stable and unstable manifolds of saddle equilibria and peri
 odic orbits. By doing so\, we identify codimension-two degenerate singula
 r cycles\, and their generalisations\, as responsible for the organisatio
 n of different tangency and heteroclinic bifurcations between saddle equi
 libria and periodic orbits in the parameter plane. Thus\, we provide a ro
 admap for observa
 ble chaotic dynamics in the semiclassical approximation of the two-site Bo
 se--Hubbard dimer model\, which connects novel results in bifurcation theo
 ry with novel applications through numerical continuation techniques.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/36/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Haesun Park (Georgia Institute of Technology)
DTSTART:20211206T210000Z
DTEND:20211206T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/37
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/37/">Multi-view Unsupervised and Semi-Supervised Clustering based o
 n Content and Connection Information</a>\nby Haesun Park (Georgia Institut
 e of Technology) as part of CRM Applied Math Seminar\n\n\nAbstract\nConstr
 ained Low Rank Approximation (CLRA) is a powerful framework for a variety 
 of important tasks in large scale data analytics such as topic discovery i
 n text data and community detection in social network data. In this talk\,
  a hybrid method called Joint Nonnegative Matrix Factorization (JointNMF) 
 is introduced for latent information discovery from multi-view data sets t
 hat contain both text content and connection structure information. The me
 thod jointly optimizes an integrated objective function\, which is a combi
 nation of the Nonnegative Matrix Factorization (NMF) objective function fo
 r handling text content/attribute information and the Symmetric NMF (SymNM
 F) objective function for handling relation/connection information. An eff
 ective algorithm for the joint NMF objective function is proposed utilizin
 g the block coordinate descent (BCD) method.\nThe proposed hybrid method s
 imultaneously discovers content associations and related latent connection
 s without any need for post-processing or additional clustering. In additi
 on\, known partial label information can be incorporated into a JointNMF f
 or semi-supervised clustering framework. The experimental results from sev
 eral real-life application problems illustrate the advantages of the propo
 sed approaches.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/37/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alex Bihlo (Memorial University)
DTSTART:20220110T210000Z
DTEND:20220110T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/38
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/38/">Deep neural networks for solving differential equations on gen
 eral orientable surfaces</a>\nby Alex Bihlo (Memorial University) as part 
 of CRM Applied Math Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/38/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Matus Benko (Johannes Kepler University Linz)
DTSTART:20220117T210000Z
DTEND:20220117T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/39
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/39/">Variational Analysis: Basics\, Calculus\, and Semismoothness*<
 /a>\nby Matus Benko (Johannes Kepler University Linz) as part of CRM Appli
 ed Math Seminar\n\n\nAbstract\nThe purpose of this talk is to offer a brie
 f introduction into set-valued and variational analysis and to try to moti
 vate the study of this area. To this end\, we first discuss some basic not
 ions and ideas. Namely\, we try to explain why set-valued mappings should 
 be analyzed\, what properties of such mappings seem to be useful and are 
 typically studied\, as well as how one can analyze them\, i.e.\, what are 
 the available tools. It should not be very surprising that\, just like in 
 the standard analysis of functions\, derivatives play a crucial role. Thus
 \, we clarify how to differentiate set-valued mappings using the machinery
  of variational geometry (tangent and normal cones). Then we discuss in mo
 re depth the topic of calculus rules that enable one to properly manipulat
 e generalized derivatives and apply them to practically relevant prob
 lems. We conclude with some remarks about the new property of semismoothne
 ss* for set-valued mappings and the related Newton method for solving gene
 ralized equations (inclusions).\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/39/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Rolnick (McGill University)
DTSTART:20220124T210000Z
DTEND:20220124T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/40
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/40/">Expressivity and learnability in deep neural networks</a>\nby 
 David Rolnick (McGill University) as part of CRM Applied Math Seminar\n\n\
 nAbstract\nIn this talk\, we show that there is a large gap between the ma
 ximum complexity of the functions that a neural network can express and th
 e expected complexity of the functions that it learns in practice.  Deep R
 eLU networks are piecewise linear functions\, and the number of distinct l
 inear regions is a natural measure of their expressivity.  It is well-know
 n that the maximum number of linear regions grows exponentially with the d
 epth of the network\, and this has often been used to explain the success 
 of deeper networks.  We show that the expected number of linear regions in
  fact grows polynomially in the size of the network\, far below the expone
 ntial upper bound and independent of the depth of the network.  This state
 ment holds true both at initialization and after training\, under natural 
 assumptions for gradient-based learning algorithms.  We also show that the
  linear regions of a ReLU network reveal information about the network's p
 arameters.  In particular\, it is possible to reverse-engineer the weights
  and architecture of an unknown deep ReLU network merely by querying it.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/40/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sebastien Le Digabel (Polytechnique Montreal)
DTSTART:20220214T210000Z
DTEND:20220214T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/41
DESCRIPTION:by Sebastien Le Digabel (Polytechnique Montreal) as part of CR
 M Applied Math Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/41/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tom Trogdon (U of Washington)
DTSTART:20220221T210000Z
DTEND:20220221T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/42
DESCRIPTION:by Tom Trogdon (U of Washington) as part of CRM Applied Math S
 eminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/42/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Guy Wolf (UdeM)
DTSTART:20220307T210000Z
DTEND:20220307T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/43
DESCRIPTION:by Guy Wolf (UdeM) as part of CRM Applied Math Seminar\n\nAbst
 ract: TBA\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/43/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Margarida Carvalho (UdeM)
DTSTART:20220321T200000Z
DTEND:20220321T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/44
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/44/">Mathematical Programming Games: pushing the limits of equilibr
 ia computation</a>\nby Margarida Carvalho (UdeM) as part of CRM Applied Ma
 th Seminar\n\n\nAbstract\nMathematical programming games (MPGs) encompass 
 flexible problem modeling when decision makers interact.  Through them\, w
 e can express each player's goal in the game as a parametric optimization 
 problem.  In this talk\, we will first provide examples of such games
  and their MPG formulation.  Then\, we will focus on MPGs where decisions 
 can take integer values\, the so-called integer programming games.  We wil
 l also discuss Nash games among Stackelberg leaders.  The theoretical intr
 actability of these games will be presented as well as algorithmic schemes
  to solve them in practice.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/44/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Soledad Villar (Johns Hopkins)
DTSTART:20220404T200000Z
DTEND:20220404T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/45
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/45/">Units-equivariant machine learning</a>\nby Soledad Villar (Joh
 ns Hopkins) as part of CRM Applied Math Seminar\n\n\nAbstract\nWe combine 
 ideas from dimensional analysis and from equivariant machine learning to p
 rovide an approach for units-equivariant machine learning. Units equivaria
 nce is the exact symmetry that follows from the requirement that relations
 hips among measured quantities must obey self-consistent dimensional scali
 ngs. Our approach is to construct a dimensionless version of the learning 
 task\, using classic results from dimensional analysis\, and then perform 
 the learning task in the dimensionless space. This approach can be used to
  impose units equivariance on almost any contemporary machine-learning met
 hods\, including those that are equivariant to rotations and other groups.
  Units equivariance is expected to be particularly valuable in the context
 s of symbolic regression and emulation. We discuss the in-sample and out-o
 f-sample prediction accuracy gains one can obtain if exact units equivaria
 nce is imposed\; the symmetry is extremely powerful in some contexts. We i
 llustrate these methods with simple numerical examples involving dynamical
  systems in physics and ecology.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/45/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Giang Tran (University of Waterloo)
DTSTART:20220411T200000Z
DTEND:20220411T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/46
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/46/">Sparse Random Feature Models: Theoretical Guarantees and Appli
 cations</a>\nby Giang Tran (University of Waterloo) as part of CRM Applied
  Math Seminar\n\n\nAbstract\nRandom feature methods have been successful i
 n various machine learning tasks\, are easy to compute\, and come with the
 oretical accuracy bounds.  They serve as an alternative approach to standa
 rd neural networks since they can represent similar function spaces withou
 t a costly training phase.  However\, for accuracy\, random feature method
 s require more measurements than trainable parameters\, limiting their use
  for data-scarce applications or problems in scientific machine learning. 
  In this talk\, we will introduce the sparse random feature expansion to o
 btain parsimonious random feature models.  Specifically\, we leverage idea
 s from compressive sensing to generate random feature expansions with theo
 retical guarantees even in the data-scarce setting.  We also present a ran
 dom feature model for approximating high-dimensional sparse additive funct
 ions and a sparse random mode decomposition to extract intrinsic modes fro
 m challenging time-series data.  Comparisons show that our proposed approa
 ches perform better or are comparable to other state-of-the-art or popular
  methods.  Applications of our methods on identifying important variables 
 in high-dimensional settings as well as on decomposing music pieces and vi
 sualizing black-hole mergers will be addressed.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/46/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Fabian Pedregosa (Google)
DTSTART:20220207T210000Z
DTEND:20220207T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/47
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/47/">Efficient and Modular Implicit Differentiation</a>\nby Fabian 
 Pedregosa (Google) as part of CRM Applied Math Seminar\n\n\nAbstract\nAuto
 matic differentiation (autodiff) has revolutionized machine learning.  It 
 allows expressing complex computations by composing elementary ones in cre
 ative ways and removes the burden of computing their derivatives by hand. 
  More recently\, differentiation of optimization problem solutions has att
 racted widespread attention with applications such as optimization layers\
 , and in bi-level problems such as hyper-parameter optimization and meta-l
 earning.  However\, so far\, implicit differentiation remained difficult t
 o use for practitioners\, as it often required case-by-case tedious mathem
 atical derivations and implementations.  In this paper\, we propose a unif
 ied\, efficient and modular approach for implicit differentiation of optim
 ization problems.  In our approach\, the user defines directly in Python a
  function F capturing the optimality conditions of the problem to be diffe
 rentiated.  Once this is done\, we leverage autodiff of F and implicit dif
 ferentiation to automatically differentiate the optimization problem.  Our
  approach thus combines the benefits of implicit differentiation and autod
 iff.  It is efficient as it can be added on top of any state-of-the-art so
 lver and modular as the optimality condition specification is decoupled fr
 om the implicit differentiation mechanism.  We show that seemingly simple 
 principles allow one to recover many existing implicit differentiation methods 
 and create new ones easily.  We demonstrate the ease of formulating and so
 lving bi-level optimization problems using our framework.  We also showcas
 e an application to the sensitivity analysis of molecular dynamics.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/47/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Anna Ma (University of California\, Irvine)
DTSTART:20220314T200000Z
DTEND:20220314T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/48
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/48/">The Kaczmarz Algorithm: Greed\, Randomness\, and Tensors</a>\n
 by Anna Ma (University of California\, Irvine) as part of CRM Applied Math
  Seminar\n\n\nAbstract\nIn settings where data sets become extremely large
 -scale\, stochastic iterative methods such as the Kaczmarz algorithm and R
 andomized Coordinate Descent become advantageous due to their low memory f
 ootprint.  The Randomized Kaczmarz algorithm in particular has garnered at
 tention owing to its applicability in large-scale settings and its elegant
  geometric interpretation.  In this talk\, we will discuss the Randomized 
 Kaczmarz algorithm\, its connection to the popular Stochastic Gradient De
 scent algorithm\, and its greedy counterpart\, Motzkin's Method.  This pres
 entation contains joint work with Jamie Haddock and Denali Molitor.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/48/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tom Trogdon (U of Washington)
DTSTART:20220328T200000Z
DTEND:20220328T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/49
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/49/">Perturbations of orthogonal polynomials: Riemann-Hilbert probl
 ems\, random matrices and numerical linear algebra</a>\nby Tom Trogdon (U 
 of Washington) as part of CRM Applied Math Seminar\n\n\nAbstract\nWe consi
 der the perturbation of orthogonal polynomials (OPs) with respect to chang
 es in the orthogonality measure.  While the transformation from a measure 
 to its orthogonal polynomials is typically ill-conditioned as the degree o
 f the polynomial grows\, using the Fokas-Its-Kitaev Riemann--Hilbert probl
 em\, we show that in certain settings this mapping is well-conditioned.  A
  usable perturbation theory can then be obtained.  The results are strengt
 hened when the asymptotics of the OPs with respect to a limiting measure a
 re known. Our main applications are to random matrices and to numerical al
 gorithms and dynamical systems applied to these random matrices.  This is 
 joint work with Percy Deift\, Xiucai Ding and Elliot Paquette.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/49/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bruno Després (Jacques-Louis Lions Laboratory)
DTSTART:20220519T200000Z
DTEND:20220519T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/50
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/50/">Neural Networks from the viewpoint of Numerical Analysis</a>\n
 by Bruno Després (Jacques-Louis Lions Laboratory) as part of CRM Applied 
 Math Seminar\n\n\nAbstract\nThe presentation will focus on the interplay b
 etween\, on the one hand Neural Networks and Machine Learning which are em
 erging hot topics\, and on the other hand Numerical Analysis which is now 
 a classical topic. The Yarotsky theorem will be discussed together with a 
 recent alternative to the polarisation formula (D.-Ancellin '19). The stab
 ility of the Adam algorithm will be shown with a particular Lyapunov funct
 ion. Discretization of transport equations for CFD will serve as an applic
 ative illustration.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/50/
END:VEVENT
BEGIN:VEVENT
SUMMARY:James Forbes (McGill)
DTSTART:20220912T200000Z
DTEND:20220912T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/51
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/51/">Regularization Techniques in Koopman-based System Identificati
 on</a>\nby James Forbes (McGill) as part of CRM Applied Math Seminar\n\n\n
 Abstract\nUsing the Koopman operator\, nonlinear systems can be expressed 
 as infinite-dimensional linear systems. Data-driven methods can then be us
 ed to approximate a finite-dimensional Koopman operator\, which is particu
 larly useful for system identification\, control\, and state estimation ta
 sks. However\, approximating large Koopman operators is numerically challe
 nging\, leading to unstable Koopman operators being identified for otherwi
 se stable systems.\nThis talk will present a selection of techniques to re
 gularize the Koopman regression problem\, including a novel H-infinity nor
 m regularizer. In particular\, how to re-pose the system identification pr
 oblem in order to leverage numerically efficient optimization tools\, such
  as linear matrix inequalities\, will be presented. This talk is based on 
 a preprint available on arXiv.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/51/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Quentin Bertrand (Mila)
DTSTART:20220919T200000Z
DTEND:20220919T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/52
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/52/">Implicit Differentiation in Non-Smooth Convex Learning</a>\nby
  Quentin Bertrand (Mila) as part of CRM Applied Math Seminar\n\n\nAbstract
 \nFinding the optimal hyperparameters of a model can be cast as a bilevel 
 optimization problem\, typically solved using zero-order techniques. In this wor
 k we study first-order methods when the inner optimization problem is conv
 ex but non-smooth. We show that the forward-mode differentiation of proxim
 al gradient descent and proximal coordinate descent yield sequences of Jac
 obians converging toward the exact Jacobian. Using implicit differentiatio
 n\, we show it is possible to leverage the non-smoothness of the inner pro
 blem to speed up the computation. Finally\, we provide a bound on the erro
 r made on the hypergradient when the inner optimization problem is solved 
 approximately. Results on regression and classification problems reveal co
 mputational benefits for hyperparameter optimization\, especially when mul
 tiple hyperparameters are required.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/52/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Akitoshi Takayasu (University of Tsukuba)
DTSTART:20220926T200000Z
DTEND:20220926T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/53
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/53/">A general approach for rigorously integrating PDEs using semig
 roup theory</a>\nby Akitoshi Takayasu (University of Tsukuba) as part of C
 RM Applied Math Seminar\n\n\nAbstract\nIn this talk we introduce a general
  rigorous PDE integrator that proves the existence of a solution to the Ca
 uchy problem of time-dependent PDEs. We derive a fixed-point formulation t
 o prove the existence of a solution locally in time\, which is based on th
 e solution map of a linearized problem called the evolution operator. Using ri
 gorous numerics we validate the contraction of the fixed-point form on a n
 eighborhood of a numerically computed approximate solution. Then we extend
  the time interval of existence of the solution via time stepping. The main advan
 tage of our approach is that the rigorous integrator can be applied to a g
 eneral class of PDEs\, even in higher spatial dimensions.
 \nThis is joint work with Jean-Philippe Lessard and Gabriel Duchesne.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/53/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Goluskin (U. of Victoria)
DTSTART:20221003T200000Z
DTEND:20221003T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/54
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/54/">Verifying global stability of fluid flows despite transient gr
 owth of energy</a>\nby David Goluskin (U. of Victoria) as part of CRM Appl
 ied Math Seminar\n\n\nAbstract\nVerifying nonlinear stability of a laminar
  fluid flow against all perturbations is a classic challenge in fluid dyna
 mics. All past results rely on monotonic decrease of a perturbation energy
  or a similar quadratic generalized energy. This "energy method" cannot sh
 ow global stability of any flow in which perturbation energy may grow tran
 siently. For the many flows that allow transient energy growth but seem to
  be globally stable (e.g. pipe flow and other parallel shear flows at cert
 ain Reynolds numbers) there has been no way to mathematically verify globa
 l stability. After explaining why the energy method was the only way to ve
 rify global stability of fluid flows for over 100 years\, I will describe 
 a different approach that is broadly applicable but more technical. This a
 pproach\, proposed in 2012 by Goulart and Chernyshenko\, uses sum-of-squar
 es polynomials to computationally construct non-quadratic Lyapunov functio
 ns that decrease monotonically for all flow perturbations. I will present 
 a computational implementation of this approach for the example of 2D plan
 e Couette flow\, where we have verified global stability at Reynolds numbe
 rs above the energy stability threshold. This energy stability result for 
 2D Couette flow had not been improved upon since being found by Orr in 190
 7. The results I will present are the first verification of global stabili
 ty\, for any fluid flow\, that surpasses the energy method. This is join
 t work with Federico Fuentes (Universidad Católica de Chile
 ) and Sergei Chernyshenko (Imperial College London).\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/54/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nicola Guglielmi (Gran Sasso Science Institute)
DTSTART:20221017T200000Z
DTEND:20221017T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/55
DESCRIPTION:by Nicola Guglielmi (Gran Sasso Science Institute) as part of 
 CRM Applied Math Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/55/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Robert Baraldi (Sandia National Labs)
DTSTART:20221024T200000Z
DTEND:20221024T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/56
DESCRIPTION:by Robert Baraldi (Sandia National Labs) as part of CRM Applie
 d Math Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/56/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Serge Prudhomme (Polytechnique Montreal)
DTSTART:20221107T210000Z
DTEND:20221107T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/57
DESCRIPTION:by Serge Prudhomme (Polytechnique Montreal) as part of CRM App
 lied Math Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/57/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Kimon Fountoulakis (U. of Waterloo)
DTSTART:20221128T210000Z
DTEND:20221128T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/58
DESCRIPTION:by Kimon Fountoulakis (U. of Waterloo) as part of CRM Applied 
 Math Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/58/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stephanie Dodson (Colby College)
DTSTART:20221205T210000Z
DTEND:20221205T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/59
DESCRIPTION:by Stephanie Dodson (Colby College) as part of CRM Applied Mat
 h Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/59/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tatiana Bubba (University of Bath)
DTSTART:20221114T200000Z
DTEND:20221114T210000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/60
DESCRIPTION:by Tatiana Bubba (University of Bath) as part of CRM Applied M
 ath Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/60/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ashwin Pananjady (Georgia Tech)
DTSTART:20221121T210000Z
DTEND:20221121T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/61
DESCRIPTION:by Ashwin Pananjady (Georgia Tech) as part of CRM Applied Math
  Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/61/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Test (test)
DTSTART:20221212T210000Z
DTEND:20221212T220000Z
DTSTAMP:20260422T225700Z
UID:AppliedMathematics/62
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMathe
 matics/62/">test</a>\nby Test (test) as part of CRM Applied Math Seminar\n
 \n\nAbstract\nA tensor T(x_1\, ...\, x_n) is a multilinear function of the
  input vectors x_j in F_q^n\, where F_q is a finite field. T has a small analytic 
 rank if its output distribution is far from uniform. It has partition rank
  `r' if we can write T = f_1 * g_1 + ... + f_r * g_r\, where the f_i and g_i a
 re tensors in fewer variables. Analytic rank measures the amount of random
 ness\, and partition rank measures the amount of structure. It is known th
 at if T has small partition rank\, it must have small analytic rank. Gre
 en and Tao proved an inverse theorem stating that if T has small analyti
 c rank then it has small partition rank. Their bound was qualitative\, how
 ever\, and several authors gave quantitative improvements. Janzer and Mili
 cevic independently proved a polynomial dependence. We prove an optimal in
 verse theorem: the analytic rank and partition rank are equivalent up to c
 onstant factors over large enough fields. Our techniques are very differen
 t from the usual methods in this area\; we rely on algebraic geometry rath
 er than additive combinatorics. This is joint work with Guy Moshkovitz.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/62/
END:VEVENT
END:VCALENDAR
