BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Alex Townsend (Cornell University)
DTSTART;VALUE=DATE-TIME:20200427T200000Z
DTEND;VALUE=DATE-TIME:20200427T210000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/1
DESCRIPTION:Title: The ultraspherical spectral method\nby Alex Townsend (Corne
ll University) as part of CRM Applied Math Seminar\n\nLecture held in Webi
nar.\n\nAbstract\nPseudospectral methods\, based on high degree polynomial
s\, have spectral accuracy when solving differential equations but typical
ly lead to dense and ill-conditioned matrices. The ultraspherical spectral
method is a numerical technique to solve ordinary and partial differentia
l equations\, leading to almost banded well-conditioned linear systems whi
le maintaining spectral accuracy. In this talk\, we introduce the ultrasph
erical spectral method and develop it into a spectral element method using
a modification to a hierarchical Poincaré-Steklov domain decomposition m
ethod.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Thomas Bury (McGill University)
DTSTART;VALUE=DATE-TIME:20200511T200000Z
DTEND;VALUE=DATE-TIME:20200511T210000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/2
DESCRIPTION:Title: Detecting and distinguishing bifurcations from noisy time serie
s data\nby Thomas Bury (McGill University) as part of CRM Applied Math
Seminar\n\nLecture held in Webinar.\n\nAbstract\nNumerous systems in the
natural sciences have the capacity to undergo an abrupt change in their dy
namical behaviour as a threshold is crossed. Prominent examples include th
e collapse of fisheries\, algal blooms and paleoclimatic transitions. Math
ematical models reveal such transitions as the result of crossing a bifurc
ation and help to elucidate the underlying mechanisms. However\, the numbe
r of unknowns is often large\, making it difficult to infer where the bifu
rcation occurs in the real system.\nIn this talk\, we will look at methods
for detecting bifurcations using data-driven approaches. These methods ex
ploit generic dynamical phenomena that occur prior to bifurcations\, such
as critical slowing down\, in order to infer their approach. We will show
how the power spectrum of noisy time series data provides information on t
he type of bifurcation and validate this approach with an empirical predat
or-prey experiment that undergoes a Hopf bifurcation. Finally\, we will ex
plore deep learning methods for detection of bifurcations and make compari
sons to the more traditional statistical methods in their ability to detect bi
furcations.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bamdad Hosseini (California Institute of Technology)
DTSTART;VALUE=DATE-TIME:20200622T200000Z
DTEND;VALUE=DATE-TIME:20200622T210000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/3
DESCRIPTION:Title: Data-driven supervised learning: Neural networks and uncertaint
y quantification\nby Bamdad Hosseini (California Institute of Technolo
gy) as part of CRM Applied Math Seminar\n\n\nAbstract\nIn this talk I will
discuss some ideas at the intersection of machine learning and uncertaint
y quantification with a particular focus on data-driven methods that do no
t require explicit knowledge of processes that generate the data. In the
first half of the talk I will discuss supervised learning on Banach spaces
for emulation of PDE based models and outline a method that combines prin
cipal component analysis with neural network regression for mesh-independe
nt approximation of PDE solutions. In the second half I will take a diffe
rent approach to supervised learning viewing it as a conditional sampling
problem. I will then introduce a measure transport framework based on gen
erative adversarial networks (GANs) for data-driven conditional sampling.
\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Theodore Kolokolnikov (Dalhousie University)
DTSTART;VALUE=DATE-TIME:20200629T200000Z
DTEND;VALUE=DATE-TIME:20200629T210000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/4
DESCRIPTION:Title: Simple agent-based models and their continuum limit\nby The
odore Kolokolnikov (Dalhousie University) as part of CRM Applied Math Semi
nar\n\n\nAbstract\nWe discuss several very different ABM models and their
continuum limits.\n\nFirst\, consider the following agent-based model of c
oronavirus spread: people move randomly and infection occurs with some non
zero probability when an infected individual comes within a certain "infec
tion radius" of a susceptible individual. The question is how the infect
ion radius affects the reproduction number. At low infection rates\, this
model leads to the classical S-I-R ODE model as its continuum limit. Howev
er\, higher infection rates lead to a saturation effect\, which we comput
e explicitly using basic probability theory. Its continuum limit leads to a
n S-I-R type model with a specific saturation term. We also show that thi
s modified model gives a much better fit to the real-world data than the c
lassical SIR model.\n\nNext\, we will look at a very simple stochastic mod
el of bacterial aggregation which leads to a novel fourth-order nonlinear
PDE in its continuum limit. This PDE admits soliton-type solutions corresp
onding to bacterial aggregation patterns\, which we explicitly construct.
\n\nIf time allows\, we will consider a spatial model of wealth exchange w
hich leads to novel integro-differential equations.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stephen Becker (University of Colorado Boulder\, USA)
DTSTART;VALUE=DATE-TIME:20200921T183000Z
DTEND;VALUE=DATE-TIME:20200921T193000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/5
DESCRIPTION:Title: Algorithmic stability for generalization guarantees in machine
learning\nby Stephen Becker (University of Colorado Boulder\, USA) as
part of CRM Applied Math Seminar\n\n\nAbstract\nInspired by the practical
success of deep learning\, the broader math community has been energized r
ecently to find theoretical justification for these methods. There is a la
rge amount of theory from the computer science community\, dating to the 1
980s and earlier\, but usually the quantitative guarantees are too loose t
o be helpful in practice\, and it is rare that theory can predict somethin
g useful (such as when to perform early stopping in order to pre
vent over-fitting). \nMany of these theories are less well-known inside ap
plied math\, so we briefly review essential results before focusing on the
notion of algorithmic stability\, popularized in the early 2000s\, which
is an alternative to the more mainstream VC dimension approach\, and is on
e avenue that might give sharper theoretical guarantees. Algorithmic stabi
lity is appealing to applied mathematicians\, and in particular analysts\,
since a lot of the technical work is similar to analysis used for converg
ence proofs. \nWe give an overview of the fundamental results of algorithm
ic stability\, focusing on the stochastic gradient descent (SGD) method in
the context of a nonconvex loss function\, and give the latest state-of-t
he-art bounds\, including some of our own work (joint with L. Madden and E
. Dall'Anese) which is one of the first results that suggests when to do e
arly-stopping.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yuji Nakatsukasa (Oxford University\, UK)
DTSTART;VALUE=DATE-TIME:20200928T200000Z
DTEND;VALUE=DATE-TIME:20200928T210000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/6
DESCRIPTION:Title: Fast and stable randomized low-rank matrix approximation\nb
y Yuji Nakatsukasa (Oxford University\, UK) as part of CRM Applied Math Se
minar\n\n\nAbstract\nRandomized SVD has become an extremely successful app
roach for efficiently computing a low-rank approximation of matrices. In p
articular the paper by Halko\, Martinsson\, and Tropp (SIREV 2011) contain
s extensive analysis\, and has made it a very popular method. The typical
complexity for a rank-r approximation of m x n matrices is O(mn log n + (m+
n)r^2) for dense matrices. The classical Nyström method is much faster\, bu
t only applicable to positive semidefinite matrices. This work studies a ge
neralization of Nyström's method applicable to general matrices\, and shows
that (i) it has near-optimal approximation quality comparable to competing
methods\, (ii) the computational cost is the near-optimal O(mn log n + r^3)
for dense matrices\, with small hidden constants\, and (iii) crucially\, i
t can be implemented in a numerically stable fashion despite the presence
of an ill-conditioned pseudoinverse. Numerical experiments illustrate that
generalized Nyström can significantly outperform state-of-the-art methods
\, especially when r>>1\, achieving up to a 10-fold speedup. The method is
also well suited to updating and downdating the matrix.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Russell Luke (Universität Göttingen\, Germany)
DTSTART;VALUE=DATE-TIME:20201005T200000Z
DTEND;VALUE=DATE-TIME:20201005T210000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/7
DESCRIPTION:Title: Optimization on Spheres : Models and Proximal Algorithms with C
omputational Performance Comparisons\nby David Russell Luke (Universit
ät Göttingen\, Germany) as part of CRM Applied Math Seminar\n\n\nAbstrac
t\nWe present a unified treatment of the abstract problem of finding the b
est approximation between a cone and spheres in the image of affine transf
ormations. Prominent instances of this problem are phase retrieval and sou
rce localization. The common geometry binding these problems permits a gen
eric application of algorithmic ideas and abstract convergence results for
nonconvex optimization. We organize variational models for this problem i
nto three different classes and derive the main algorithmic approaches wit
hin these classes (13 in all). We identify the central ideas underlying th
ese methods and provide thorough numerical benchmarks comparing their perf
ormance on synthetic and laboratory data. The software and data of our exp
eriments are all publicly accessible. We also introduce one new algorithm\
, a cyclic relaxed Douglas-Rachford algorithm\, which outperforms all othe
r algorithms by every measure: speed\, stability and accuracy. The analysi
s of this algorithm remains open.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sheehan Olver (Imperial College London\, UK)
DTSTART;VALUE=DATE-TIME:20201019T200000Z
DTEND;VALUE=DATE-TIME:20201019T210000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/8
DESCRIPTION:Title: Sparse Spectral Methods for Power-Law Interactions\nby Shee
han Olver (Imperial College London\, UK) as part of CRM Applied Math Semin
ar\n\n\nAbstract\nAttractive-repulsive power law equilibria are an import
ant tool in modelling phenomena in collective behaviour: picture a flock o
f birds which simultaneously group together\, but not too closely (i.e.\,
they practice social distancing)\, until an equilibrium distribution is re
ached. In this talk we show that orthogonal polynomials have sparse recurr
ence relationships for power law (Riesz) kernels. This leads to highly str
uctured and efficiently solvable linear systems for the attractive-repulsi
ve case with two such kernels of opposite sign\, giving an effective numer
ical method for computing such equilibrium distributions. This links to an
d builds on related work in logarithmic potential theory\, singular integr
al equations\, and fractional differential equations.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Johannes Royset (Naval Postgraduate School\, USA)
DTSTART;VALUE=DATE-TIME:20201026T200000Z
DTEND;VALUE=DATE-TIME:20201026T210000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/9
DESCRIPTION:Title: Variational Perspectives on Mathematical Optimization\nby J
ohannes Royset (Naval Postgraduate School\, USA) as part of CRM Applied Ma
th Seminar\n\n\nAbstract\nThe mathematical tools for building optimization
models and algorithms grow out of linear algebra\, differential calculus
and real analysis. However\, the needs of applications have led to a new a
rea of mathematics that can handle systems of inequalities and functions t
hat are neither smooth nor well-defined in a traditional sense. Variationa
l analysis is the broad term for this area of mathematics. In this present
ation\, we show its crucial role in the development of optimization models
and algorithms in finite dimensions. First\, we examine variational geome
try and definitions of normal and tangent vectors that extend the classica
l notions for smooth manifolds. This in turn leads to subdifferentiability
\, a wide range of calculus rules and optimality conditions for arbitrary
functions. Second\, we develop an approximation theory for optimization pr
oblems that leads to consistent approximations\, error bounds and rates of
convergence even in the nonconvex and nonsmooth setting.\n\nDr. Johannes
O. Royset is Professor of Operations Research at the Naval Postgraduate Sc
hool. Dr. Royset's research focuses on formulating and solving stochastic
and deterministic optimization problems arising in data analytics\, sensor
management\, and reliability engineering. He was awarded a National Resea
rch Council postdoctoral fellowship in 2003\, a Young Investigator Award f
rom the Air Force Office of Scientific Research in 2007\, and the Barchi P
rize as well as the MOR Journal Award from the Military Operations Researc
h Society in 2009. He received the Carl E. and Jessie W. Menneken Faculty
Award for Excellence in Scientific Research in 2010 and the Goodeve Medal
from the Operational Research Society in 2019. Dr. Royset was a plenary sp
eaker at the International Conference on Stochastic Programming in 2016 an
d at the SIAM Conference on Uncertainty Quantification in 2018. He has a D
octor of Philosophy degree from the University of California at Berkeley (
2002). Dr. Royset has been an associate or guest editor of Operations Rese
arch\, Mathematical Programming\, Journal of Optimization Theory and Appli
cations\, Journal of Convex Analysis\, Set-Valued and Variational Analysis
\, Naval Research Logistics\, and Computational Optimization and Applicati
ons.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Heather Harrington (Oxford University\, UK)
DTSTART;VALUE=DATE-TIME:20201116T210000Z
DTEND;VALUE=DATE-TIME:20201116T220000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/10
DESCRIPTION:Title: Algebraic Systems Biology\nby Heather Harrington (Oxford U
niversity\, UK) as part of CRM Applied Math Seminar\n\n\nAbstract\nSignall
ing pathways in molecular biology can be modelled by polynomial dynamical
systems. I will present models describing two biological systems involved
in development and cancer. I will overview approaches to analyse these mod
els with data using computational algebraic geometry\, differential algebr
a and statistics. Finally\, I will present how topological data analysis c
an provide additional information to distinguish wild-type and mutant mole
cules in one pathway. These case studies showcase how computational geomet
ry\, topology and dynamics can provide new insights in the biological syst
ems\, specifically how changes at the molecular scale (e.g. molecular muta
tions) result in kinetic differences that are observed as phenotypic chang
es (e.g. mutations in fruit fly wings).\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Thomas Surowiec (Philipps-Universität Marburg\, Germany)
DTSTART;VALUE=DATE-TIME:20201123T210000Z
DTEND;VALUE=DATE-TIME:20201123T220000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/11
DESCRIPTION:Title: A Primal-Dual Algorithm for Risk Minimization in PDE-Constrain
ed Optimization\nby Thomas Surowiec (Philipps-Universität Marburg\, G
ermany) as part of CRM Applied Math Seminar\n\n\nAbstract\nWe present an a
lgorithm for the solution of risk-averse optimization problems. The settin
g is sufficiently general so as to encompass both finite-dimensional and P
DE-constrained stochastic optimization problems. Due to a lack of smoothne
ss of many popular risk measures and non-convexity of the objective functi
ons\, both the numerical approximation and the numerical solution pose a ma
jor computational challenge. The proposed algorithm addresses these issues in p
art by making use of the favorable dual properties of coherent risk measur
es. The algorithm itself is motivated by the classical method of multiplie
rs and exploits recent results on epigraphical regularization of risk meas
ures. Consequently\, the algorithm requires the solution of a sequence of
smooth problems using derivative-based methods. We prove convergence of th
e algorithm in the fully continuous setting and conclude with several nume
rical examples. The algorithm is seen to outperform a popular bundle-trust
method and a direct smoothing-plus-continuation approach.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Damek Davis (Cornell University\, USA)
DTSTART;VALUE=DATE-TIME:20201130T210000Z
DTEND;VALUE=DATE-TIME:20201130T220000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/12
DESCRIPTION:Title: Nonconvex Optimization for Estimation and Learning: Dynamics\,
Conditioning\, and Nonsmoothness\nby Damek Davis (Cornell University\
, USA) as part of CRM Applied Math Seminar\n\n\nAbstract\nNonconvex optimi
zation algorithms play a major role in solving statistical estimation and
learning problems. Indeed\, simple nonconvex heuristics\, such as the stoc
hastic gradient method\, often provide satisfactory solutions in practice\
, despite such problems being NP-hard in the worst case. Key examples incl
ude deep neural network training and signal estimation from nonlinear meas
urements. While practical success stories are common\, strong theoretical
guarantees are rarer. The purpose of this talk is to overview a few (highl
y non-exhaustive!) settings where rigorous performance guarantees can be
established for nonconvex optimization\, focusing on the interplay of alg
orithm dynamics\, problem conditioning\, and nonsmoothness.\n\nBio: Damek
Davis received his Ph.D. in mathematics from the University of California\
, Los Angeles in 2015. In July 2016 he joined Cornell University's School
of Operations Research and Information Engineering as an Assistant Profess
or. Damek is broadly interested in the mathematics of data science\, parti
cularly the interplay of optimization\, signal processing\, statistics\, a
nd machine learning. He is the recipient of several awards\, including the
INFORMS Optimization Society Young Researchers Prize in (2019) and a Sloa
n Research Fellowship in Mathematics (2020).\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tim Hoheisel (McGill University)
DTSTART;VALUE=DATE-TIME:20210111T210000Z
DTEND;VALUE=DATE-TIME:20210111T220000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/13
DESCRIPTION:Title: Halting Time is Predictable for Large Models: A Universality P
roperty and Average-case Analysis\nby Tim Hoheisel (McGill University)
as part of CRM Applied Math Seminar\n\n\nAbstract\nAverage-case analysis
computes the complexity of an algorithm averaged over all possible inputs.
Compared to worst-case analysis\, it is more representative of the typica
l behavior of an algorithm\, but remains largely unexplored in optimization
. One difficulty is that the analysis can depend on the probability distri
bution of the inputs to the model. However\, we show that this is not the
case for a class of large-scale problems trained with first-order methods
including random least squares and one-hidden-layer neural networks with r
andom weights. In fact\, the halting time exhibits a universality propert
y: it is independent of the probability distribution. With this barrier fo
r average-case analysis removed\, we provide the first explicit average-ca
se convergence rates showing a tighter complexity not captured by traditio
nal worst-case analysis. Finally\, numerical simulations suggest this univ
ersality property holds for a more general class of algorithms and problem
s.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael P. Friedlander (University of British Columbia)
DTSTART;VALUE=DATE-TIME:20210118T210000Z
DTEND;VALUE=DATE-TIME:20210118T220000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/14
DESCRIPTION:Title: Polar deconvolution of mixed signals\nby Michael P. Friedl
ander (University of British Columbia) as part of CRM Applied Math Seminar
\n\n\nAbstract\nThe signal demixing problem seeks to separate the superpos
ition of multiple signals into its constituent components. We model the s
uperposition process as the polar convolution of atomic sets\, which allow
s us to use the duality of convex cones to develop an efficient two-stage
algorithm with sublinear iteration complexity and linear storage. If the
signal measurements are random\, the polar deconvolution approach stably r
ecovers low-complexity and mutually-incoherent signals with high probabili
ty and with optimal sample complexity. Numerical experiments on both real
and synthetic data confirm the theory and efficiency of the proposed appr
oach. Joint work with Zhenan Fan\, Halyun Jeong\, and Babhru Joshi at the
University of British Columbia.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nathan Kutz (University of Washington)
DTSTART;VALUE=DATE-TIME:20210125T210000Z
DTEND;VALUE=DATE-TIME:20210125T220000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/15
DESCRIPTION:Title: Targeted use of deep learning for physics and engineering\
nby Nathan Kutz (University of Washington) as part of CRM Applied Math Sem
inar\n\n\nAbstract\nMachine learning and artificial intelligence algorithm
s are now being used to automate the discovery of governing physical equat
ions and coordinate systems from measurement data alone. However\, positi
ng a universal physical law from data is challenging: (i) an appropriate co
ordinate system must also be identified\, and (ii) an accompanying discrepa
ncy model must simultaneously be proposed to account for the inevitable mis
match between theory and measurements. Using a combination of
deep learning and sparse regression\, specifically the sparse identificat
ion of nonlinear dynamics (SINDy) algorithm\, we show how a robust mathema
tical infrastructure can be formulated for simultaneously learning physics
models and their coordinate systems. This can be done with limited data
and sensors. We demonstrate the methods on a diverse number of examples\,
showing how data can maximally be exploited for scientific and engineerin
g applications.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Zhaojun Bai (UC Davis)
DTSTART;VALUE=DATE-TIME:20210201T210000Z
DTEND;VALUE=DATE-TIME:20210201T220000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/16
DESCRIPTION:Title: Rayleigh quotient optimizations and eigenvalue problems\nb
y Zhaojun Bai (UC Davis) as part of CRM Applied Math Seminar\n\n\nAbstract
\nMany computational science and data analysis techniques lead to optimizi
ng Rayleigh quotient (RQ) and RQ-type objective functions\, such as comput
ing excitation states (energies) of electronic structures\, robust classif
ication to handle uncertainty and constrained data clustering to incorpora
te domain knowledge. We will discuss emerging RQ optimization problems\,
variational principles\, and reformulations to algebraic linear and nonlin
ear eigenvalue problems. We will show how to exploit underlying propertie
s of these eigenvalue problems for designing fast solvers\, and illustrate
the efficacy of these solvers in applications.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marwa El Halabi (MILA)
DTSTART;VALUE=DATE-TIME:20210208T210000Z
DTEND;VALUE=DATE-TIME:20210208T220000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/17
DESCRIPTION:Title: Optimal approximation for unconstrained non-submodular minimiz
ation\nby Marwa El Halabi (MILA) as part of CRM Applied Math Seminar\n
\n\nAbstract\nSubmodular function minimization is well studied\, and exist
ing algorithms solve it exactly or up to arbitrary accuracy. However\, in
many applications\, such as structured sparse learning or batch Bayesian
optimization\, the objective function is not exactly submodular\, but clos
e. In this case\, no theoretical guarantees exist. Indeed\, submodular m
inimization algorithms rely on intricate connections between submodularity
and convexity. We show how these relations can be extended to obtain app
roximation guarantees for minimizing non-submodular functions\, characteri
zed by how close the function is to submodular. We also extend this resul
t to noisy function evaluations. Our approximation results are the first
for minimizing non-submodular functions\, and are optimal\, as established
by our matching lower bound.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Patrick Combettes (NC State)
DTSTART;VALUE=DATE-TIME:20210215T210000Z
DTEND;VALUE=DATE-TIME:20210215T220000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/18
DESCRIPTION:Title: Perspective Functions and Applications\nby Patrick Combett
es (NC State) as part of CRM Applied Math Seminar\n\n\nAbstract\nIn this t
alk I will discuss mathematical and computational issues pertaining to per
spective functions\, a powerful concept that permits one to extend a convex fu
nction to a jointly convex one in terms of an additional scale variable. A
pplications in inverse problems and statistics will be presented.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Heinz Bauschke (UBC)
DTSTART;VALUE=DATE-TIME:20210222T210000Z
DTEND;VALUE=DATE-TIME:20210222T220000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/19
DESCRIPTION:Title: Compositions of projection mappings: fixed point sets and diff
erence vectors\nby Heinz Bauschke (UBC) as part of CRM Applied Math Se
minar\n\n\nAbstract\nProjection operators and associated projection algori
thms are fundamental building blocks in fixed point theory and optimizatio
n. In this talk\, I will survey recent results on the displacement mappin
g of the right-shift operator and sketch a new application deepening our u
nderstanding of the geometry of the fixed point set of the composition of
projection operators in Hilbert space. Based on joint works with Salha Al
wadani\, Julian Revalski\, and Shawn Wang.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Paul E. Hand (Northeastern University)
DTSTART;VALUE=DATE-TIME:20210308T210000Z
DTEND;VALUE=DATE-TIME:20210308T220000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/20
DESCRIPTION:Title: Signal Recovery with Generative Priors\nby Paul E. Hand (N
ortheastern University) as part of CRM Applied Math Seminar\n\n\nAbstract\
nRecovering images from very few measurements is an important task in imag
ing problems. Doing so requires assuming a model of what makes some image
s natural. Such a model is called an image prior. Classical priors such
as sparsity have led to the speedup of Magnetic Resonance Imaging in certa
in cases. With the recent developments in machine learning\, neural netwo
rks have been shown to provide efficient and effective priors for inverse
problems arising in imaging. In this talk\, we will discuss the use of ne
ural network generative models for inverse problems in imaging. We will p
resent a rigorous recovery guarantee at optimal sample complexity for comp
ressed sensing and other inverse problems under a suitable random model.
We will see that generative models enable an efficient algorithm for phase
retrieval from generic measurements with optimal sample complexity. In c
ontrast\, no efficient algorithm is known for this problem in the case of
sparsity priors. We will discuss strengths\, weaknesses\, and future oppo
rtunities of neural networks and generative models as image priors. These
works are in collaboration with Vladislav Voroninski\, Reinhard Heckel\,
Ali Ahmed\, Wen Huang\, Oscar Leong\, Jorio Cocola\, Muhammad Asim\, and M
ax Daniels.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Clarice Poon (University of Bath)
DTSTART;VALUE=DATE-TIME:20210315T200000Z
DTEND;VALUE=DATE-TIME:20210315T210000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/21
DESCRIPTION:Title: Off-the-grid sparse estimation\nby Clarice Poon (Universit
y of Bath) as part of CRM Applied Math Seminar\n\n\nAbstract\nThe behaviou
r of sparse regularization using the Lasso method is well understood when d
ealing with discretized linear models. However\, the behaviour of Lasso is p
oor when dealing with models with very large parameter spaces\, and in rece
nt years there has been much interest in "off-the-grid" approaches\, which u
se a continuous parameter space in conjunction with a convex optimization p
roblem over measures. In my talk\, I will present some re
cent results which explain the behaviour of this method in arbitrary dimen
sions. Some highlights include the use of the Fisher metric to study the
performance of Blasso over general domains and the application of this fo
r quantitative MRI.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Olga Mula (Paris Dauphine)
DTSTART;VALUE=DATE-TIME:20210322T200000Z
DTEND;VALUE=DATE-TIME:20210322T210000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/22
DESCRIPTION:Title: Depth-Adaptive Neural Networks from the Optimal Control viewpo
int\nby Olga Mula (Paris Dauphine) as part of CRM Applied Math Seminar
\n\n\nAbstract\nIn recent years\, deep learning has been connected with op
timal control as a way to define a notion of a continuous underlying learn
ing problem. In this view\, neural networks can be interpreted as a disc
retization of a parametric Ordinary Differential Equation which\, in the l
imit\, defines a continuous-depth neural network. The learning task th
en consists in finding the best ODE parameters for the problem under consi
deration\, and their number increases with the accuracy of the time discr
etization. Although important steps have been taken to realize the advant
ages of such continuous formulations\, most current learning techniques f
ix a discretization (i.e. the number of layers is fixed). In this work\,
we propose an iterative adaptive algorithm where we progressively refine
the time discretization (i.e. we increase the number of layers). Provided
that certain tolerances are met across the iterations\, we prove that th
e strategy converges to the underlying continuous problem. One salient ad
vantage of such a shallow-to-deep approach is that it helps to benefit in
practice from the higher approximation properties of deep networks by mit
igating over-parametrization issues. The performance of the approach is i
llustrated in several numerical examples.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sasha Aravkin (University of Washington)
DTSTART;VALUE=DATE-TIME:20210412T200000Z
DTEND;VALUE=DATE-TIME:20210412T210000Z
DTSTAMP;VALUE=DATE-TIME:20210612T225829Z
UID:AppliedMathematics/23
DESCRIPTION:Title: A tale of two models for Covid-19 scenarios\nby Sasha Arav
kin (University of Washington) as part of CRM Applied Math Seminar\n\n\nAb
stract\nThe Covid-19 pandemic is a defining global health event in the 21st ce
ntury. Forecasting the evolution of the pandemic is a key problem for any
one trying to plan ahead. Since March 2020\, IHME has been generating Cov
id-19 scenarios\, first for US states and then for all Admin-1 locations a
round the world. These scenarios have been intensively used\; results are
uploaded weekly to an interactive website: https://covid19.healthdata.org
/ \nIn this talk\, we describe two core mathematical models underlying the
IHME scenarios. The first model\, dubbed CurveFit\, used strong assumpti
ons to get useful predictions using extremely limited data\, and was used
during March and April of 2020. The second model\, a data-driven SEIIR mo
del\, was put in play in June 2020\, and provides a flexible way to incorp
orate relationships with key drivers such as mobility\, mask use\, and pne
umonia seasonality. We describe the mathematics underlying both models\,
and discuss the interplay between stability\, scalability\, and complexity
in mathematical modeling.\n
LOCATION:https://researchseminars.org/talk/AppliedMathematics/23/
END:VEVENT
END:VCALENDAR