BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:David Williams (Penn State University)
DTSTART:20221104T223000Z
DTEND:20221104T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/2
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 2/">Space-Time Finite Element Methods: Challenges and Perspectives</a>\nby
  David Williams (Penn State University) as part of SFU Mathematics of Comp
 utation\, Application and Data ("MOCAD") Seminar\n\nLecture held in K9509.
 \n\nAbstract\nSpace-time finite element methods (FEMs) are likely to grow 
 in popularity due to the ongoing growth in the size\, speed\, and parallel
 ism of modern computing platforms. The allure of space-time FEMs is both i
 ntuitive and practical. From the intuitive standpoint\, there is considera
 ble elegance and simplicity in accommodating both space and time using the
  same numerical discretization strategy. From the practical standpoint\, t
 here are considerable advantages in efficiency and accuracy that can be ga
 ined from space-time mesh adaptation: i.e. adapting the mesh in both space
  and time to resolve important solution features. However\, despite these 
 considerable advantages\, there are numerous challenges that must be overc
 ome before space-time FEMs can realize their full potential. These challen
 ges are primarily associated with four-dimensional geometric obstacles (hy
 persurface and hypervolume mesh generation)\, four-dimensional approximati
 on theory (basis functions and quadrature rules)\, four-dimensional bounda
 ry condition enforcement (well-posed\, moving boundary conditions)\, and i
 terative-solution techniques for large-scale linear systems. In this prese
 ntation\, we will provide a brief overview of space-time FEMs\, and discus
 s some of the latest research developments and ongoing issues.\n\nDavid M.
  Williams is an assistant professor at The Pennsylvania State University i
 n the Mechanical Engineering Department. He came to Penn State from the Fl
 ight Sciences division of Boeing Commercial Airplanes and Boeing Research 
 and Technology\, where he worked for several years as a computational flui
 d dynamics engineer. Williams received his M.S. and Ph.D. in Aeronautics
  and Astronautics at Stanford University. He holds a B.S.E. in Aerospace En
 gineering from the University of Michigan. He has made significant advance
 s in the design of numerical algorithms for computational fluid dynamics
  simulations. Currently\, his research focuses on employing high-order fi
 nite element schemes to more accurately predict unsteady flows.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniel Messenger (University of Colorado Boulder)
DTSTART:20221007T223000Z
DTEND:20221007T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/4
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 4/">Weak-form sparse identification of differential equations from noisy m
 easurements</a>\nby Daniel Messenger (University of Colorado Boulder) as p
 art of SFU Mathematics of Computation\, Application and Data ("MOCAD") Sem
 inar\n\nLecture held in K9509.\n\nAbstract\nData-driven modeling refers to
  the use of measurement data to infer the parameters and structure of a ma
 thematical model\, or to aid in forward simulations of a partially known m
 athematical model. Motivated by problems in collective cell biology\, this
  talk will explore algorithms which automate the map from experimental dat
 a to governing differential equations\, specifically using weak formulatio
 ns of the dynamics. We will show that the weak form is an ideal framework 
 for identifying models from data if the performance criteria are robustnes
 s to data corruptions\, highly accurate model recovery when corruption lev
 els are low\, and computational efficiency. We will first demonstrate the 
 advantages of the resulting weak-form sparse identification for nonlinear 
 dynamics algorithm (WSINDy) in the discovery of correct underlying model e
 quations across several key modeling paradigms\, including ordinary differ
 ential equations (ODEs)\, partial differential equations (PDEs)\, and inte
 racting particle systems (IPS). We will then discuss more recent extension
 s of this framework\, including weak-form identification of PDEs from stre
 aming data\, enabling identification of time-varying coefficients\, and th
 e use of weak-form model selection as a classifier to determine species me
 mbership in a heterogeneous population of initially unlabeled cells. We wi
 ll conclude with an overview of possible next directions\, including open 
 questions related to numerical analysis and theoretical recovery guarantee
 s.\n\nPasscode 696604\n
LOCATION:https://researchseminars.org/talk/AppliedMath/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hansol Park (SFU)
DTSTART:20221014T223000Z
DTEND:20221014T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/5
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 5/">The Watanabe-Strogatz transform and constant of motion functionals for
  kinetic vector models</a>\nby Hansol Park (SFU) as part of SFU Mathematic
 s of Computation\, Application and Data ("MOCAD") Seminar\n\n\nAbstract\nW
 e present a kinetic version of the Watanabe-Strogatz (WS) transform for v
 ector models. From the generalized WS-transform\, we can redu
 ce the kinetic vector model into an ODE system. We also obtain the cross-r
 atio type constant of motion functionals for kinetic vector models under s
 uitable conditions. We present necessary and sufficient conditions for
  the existence of the suggested constant of motion functionals. As an appl
 ication of the constant of motion functional\, we provide the instability 
 of bipolar states of the kinetic swarm sphere model. We also provide the W
 S-transform and constant of motion functionals for non-identical kinetic v
 ector models.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Justin Solomon (MIT)
DTSTART:20221021T220000Z
DTEND:20221021T230000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/6
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 6/">Volumetric Methods for Modeling\, Deformation\, and Correspondence</a>
 \nby Justin Solomon (MIT) as part of SFU Mathematics of Computation\, Appl
 ication and Data ("MOCAD") Seminar\n\n\nAbstract\nIn 3D modeling\, medical
  imaging\, and other disciplines\, popular techniques for geometry process
 ing often rely on mathematical models for surface geometry\, viewing shape
 s as thin sheets embedded in $\\mathbb{R}^3$\; this construction neglects 
 the fact that many of these surfaces are "boundary representations\," inte
 nded to represent boundaries of volumes.  As an alternative\, in this talk
  we will explore how calculations on the extrinsic space around a surface 
 can benefit geometry processing applications---as well as the mathematical
 \, numerical\, and computational challenges of this extension to three dim
 ensions.  Our algorithms for these problems will build on machinery from d
 ifferential geometry\, geometric measure theory\, vector field design\, an
 d machine learning.\n\n(Joint work with several members of the MIT Geometr
 ic Data Processing Group.)\n
LOCATION:https://researchseminars.org/talk/AppliedMath/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Wiedemann (Universitaet Augsburg)
DTSTART:20220916T223000Z
DTEND:20220917T000000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/11
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 11/">Homogenization in evolving porous media</a>\nby David Wiedemann (Univ
 ersitaet Augsburg) as part of SFU Mathematics of Computation\, Application
  and Data ("MOCAD") Seminar\n\n\nAbstract\nNumerical simulations of physi
 cal or chemical processes in heterogeneous media require a resolution of
  the heterogeneous structure. If\, however\, this heterogeneity is micro
 scopically small while the object under consideration is large\, a dime
 nsional mismatch occurs and classical numerical methods become infeasib
 le.\n\nAt this point\, analytical homogenization provides effective homo
 geneous substitute models\, which can be simulated numerically much more
  easily. One class of problems that can be treated is processes in porou
 s media. In many biological or chemical applications\, the pore structur
 e evolves in time\, which impedes classical homogenization. By means of
  the two-scale transformation method\, we can overcome this difficulty a
 nd derive new effective models for problems in evolving heterogeneous me
 dia.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jingwei Hu (University of Washington)
DTSTART:20220923T223000Z
DTEND:20220924T000000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/12
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 12/">Dynamical low-rank methods for high-dimensional collisional kinetic e
 quations</a>\nby Jingwei Hu (University of Washington) as part of SFU Math
 ematics of Computation\, Application and Data ("MOCAD") Seminar\n\n\nAbstr
 act\nKinetic equations describe the nonequilibrium dynamics of a complex s
 ystem using a probability density function. Despite their important role
  in multiscale modeling to bridge microscopic and macroscopic scales\, nu
 merically solving kinetic equations is computationally demanding as they l
 ie in the six-dimensional phase space. The dynamical low-rank method is a dime
 nsion-reduction technique that has been recently applied to kinetic theory
 \, yet most of the endeavor is devoted to linear or collisionless problems
 . In this talk\, we introduce efficient dynamical low-rank methods for Bol
 tzmann type collisional kinetic equations\, building on certain prior know
 ledge about the low-rank structure of the solution.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mark Iwen (Michigan State University)
DTSTART:20221207T233000Z
DTEND:20221208T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/15
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 15/">Low-Distortion Embeddings of Submanifolds of $R^n$: Lower Bounds\, Fa
 ster Realizations\, and Applications</a>\nby Mark Iwen (Michigan State Uni
 versity) as part of SFU Mathematics of Computation\, Application and Data 
 ("MOCAD") Seminar\n\nLecture held in ASB10908.\n\nAbstract\nLet M be a smo
 oth submanifold of R^n equipped with the Euclidean (chordal) metric. This t
 alk will consider the smallest dimension\, m\, for which there exists a bi
 -Lipschitz function f: M → R^m with bi-Lipschitz constants close to one. W
 e will begin by presenting a bound for the embedding dimension m from belo
 w in terms of the bi-Lipschitz constants of f and the reach\, volume\, dia
 meter\, and dimension of M. We will then discuss how this lower bound can 
 be applied to show that prior upper bounds by Eftekhari and Wakin on the m
 inimal low-distortion embedding dimension of such manifolds using random m
 atrices achieve near-optimal dependence on dimension\, reach\, and volume 
 (even when compared against nonlinear competitors). Next\, we will discuss
  a new class of linear maps for embedding arbitrary (infinite) subsets of 
 R^n with sufficiently small Gaussian width which can both (i) achieve near
 -optimal embedding dimensions of submanifolds\, and (ii) be multiplied by 
 vectors in faster than FFT-time. When applied to d-dimensional submanifold
 s of R^n we will see that these new constructions improve on prior fast em
 bedding matrices in terms of both runtime and embedding dimension when d i
 s sufficiently small. Time permitting\, we will then conclude with a discu
 ssion of non-linear so-called “terminal embeddings” of manifolds which
  allow for extensions of the famous Johnson-Lindenstrauss Lemma beyond wha
 t any linear map can achieve.\n\nThis talk will draw on joint work with va
 rious subsets of Mark Roach (MSU)\, Benjamin Schmidt (MSU)\, and Arman Tav
 akoli (MSU).\n
LOCATION:https://researchseminars.org/talk/AppliedMath/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Robert Corless (University of Western Ontario)
DTSTART:20221012T223000Z
DTEND:20221012T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/16
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 16/">Compact cubic splines and compact finite differences</a>\nby Robert C
 orless (University of Western Ontario) as part of SFU Mathematics of Compu
 tation\, Application and Data ("MOCAD") Seminar\n\nLecture held in K9509.\
 n\nAbstract\nIn this paper we introduce an apparently new spline-like inte
 rpolant that we call a compact cubic interpolant or compact cubic spline\;
  this is similar to a cubic spline introduced in 1972 by Swartz and Varga\
 , but has higher order accuracy at the edges. We argue that for nearly uni
 form meshes the compact cubic approach offers some potential advantages\, 
 and offers a simple way to treat the edge conditions\, relieving the user 
 of the burden of deciding to use one of the three standard options: free (
 natural)\, complete (clamped)\, or “not-a-knot” conditions. Finally\, 
 we establish that the matrices defining the compact cubic splines (equival
 ently\, the fourth-order compact finite difference formulæ) are totally n
 onnegative\, if all mesh widths have the same sign\, for instance if the me
 sh is real and nodes are numbered in increasing order.\n\nThe talk will be
  in-person and use chalk\, in the wonderful multi-board room that SFU has 
 for the purpose.  The YouTube version linked above was a computer version 
 of the same talk\, with slides\, which has some advantages (run it at doub
 le speed!).  But the chalk version offers a chance to slow down and apprec
 iate more of the "big picture".  The topic will be accessible if the liste
 ner has heard what a "spline" is\, but the main point is to prove total no
 nnegativity of a certain tridiagonal matrix.  I'll also make a connection 
 to the (very useful) subject of compact finite differences.\n\nThis is joi
 nt work with Dr. Leili Rafiee Sevyeri (CS\, University of Waterloo).\n
LOCATION:https://researchseminars.org/talk/AppliedMath/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:TBA
DTSTART:20230106T233000Z
DTEND:20230107T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/18
DESCRIPTION:by TBA as part of SFU Mathematics of Computation\, Application
  and Data ("MOCAD") Seminar\n\nLecture held in K9509.\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/AppliedMath/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ruiwen Shu (University of Georgia)
DTSTART:20230120T233000Z
DTEND:20230121T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/19
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 19/">Global Minimizers of a Large Class of Anisotropic Attractive-Repulsiv
 e Interaction Energies in 2D</a>\nby Ruiwen Shu (University of Georgia) as
  part of SFU Mathematics of Computation\, Application and Data ("MOCAD") S
 eminar\n\n\nAbstract\nI will discuss my joint work with José Carrillo on 
 a large family of Riesz-type singular interaction potentials with anisotro
 py in two dimensions. Their associated global energy minimizers are given 
 by explicit formulas whose supports are determined by ellipses under certa
 in assumptions. More precisely\, by parameterizing the strength of the ani
 sotropic part we characterize the sharp range in which these explicit elli
 pse-supported configurations are the global minimizers based on linear con
 vexity arguments. Moreover\, for certain anisotropic parts\, we prove that
  for large values of the parameter the global minimizer is only given by v
 ertically concentrated measures corresponding to one dimensional minimizer
 s. We also show that these ellipse-supported configurations generically do
  not collapse to a vertically concentrated measure at the critical value f
 or convexity\, leading to an interesting gap of the parameters in between.
  In this intermediate range\, we conclude by infinitesimal concavity that 
 any superlevel set of any local minimizer in a suitable sense does not hav
 e interior points. Furthermore\, for certain anisotropic parts\, their sup
 port cannot contain any vertical segment for a restricted range of paramet
 ers\, and moreover the global minimizers are expected to exhibit a zigzag 
 behavior. All these results hold for the limiting case of the logarithmic 
 repulsive potential\, extending and generalizing previous results in the l
 iterature.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Matias Delgadino (UT Austin)
DTSTART:20230217T233000Z
DTEND:20230218T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/24
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 24/">Phase transitions and log Sobolev inequalities</a>\nby Matias Delgadi
 no (UT Austin) as part of SFU Mathematics of Computation\, Application and
  Data ("MOCAD") Seminar\n\nLecture held in Remote.\n\nAbstract\nIn this ta
 lk\, we will study the mean field limit of weakly interacting diffusions f
 or confining and interaction potentials that are not necessarily convex. W
 e explore the relationship between the large N limit of the constant in th
 e logarithmic Sobolev inequality (LSI) for the N-particle system\, and the
  presence or absence of phase transitions for the mean field limit. The no
 n-degeneracy of the LSI constant will be shown to have far-reaching conseq
 uences\, especially in the context of uniform-in-time propagation of chaos
  and the behaviour of equilibrium fluctuations. This will be done by emplo
 ying techniques from the theory of gradient flows in the 2-Wasserstein dis
 tance\, specifically the Riemannian calculus on the space of probability m
 easures.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Maria Pia Gualdani (UT Austin)
DTSTART:20230314T223000Z
DTEND:20230314T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/27
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 27/">Recent progresses in kinetic equations.</a>\nby Maria Pia Gualdani (U
 T Austin) as part of SFU Mathematics of Computation\, Application and Data
  ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nWe will discuss 
 recent mathematical results for the Landau and Boltzmann equations. Kineti
 c equations are used to describe evolution of interacting particles. The m
 ost famous kinetic equation is the Boltzmann equation: formulated by Ludwi
 g Boltzmann in 1872\, this equation describes the motion of a large class
  of gases. Later\, in 1936\, Lev Landau derived a new mathematical model
  for the motion of plasma. This latter equation was named the Landau equa
 tion. While many important questions are still partially unanswered due t
 o their mathematical complexity\, many others have been solved thanks to
  novel combinations of analytical techniques\, in particular the ones dev
 eloped by Hörmander\, Nash\, De Giorgi\, and Moser.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nathan King (The Cheriton School of Computer Science\, University 
 of Waterloo)
DTSTART:20230324T223000Z
DTEND:20230324T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/28
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 28/">A Closest Point Method with Interior Boundary Conditions for Geometry
  Processing</a>\nby Nathan King (The Cheriton School of Computer Science\,
  University of Waterloo) as part of SFU Mathematics of Computation\, Appli
 cation and Data ("MOCAD") Seminar\n\nLecture held in AQ5008.\n\nAbstract\n
 Many geometry processing tasks can be performed by solving partial differe
 ntial equations (PDEs) on surfaces. These PDEs usually involve boundary co
 nditions (e.g.\, Dirichlet or Neumann) defined anywhere on the surface\, n
 ot just on the physical (exterior) boundary of an open surface. This talk 
 discusses how to handle BCs on the interior of a surface while solving PDE
 s with the closest point method (CPM).\n\nThe CPM is an embedding method\,
  i.e.\, it solves the surface PDE by solving a PDE defined in a space surr
 ounding the surface. The PDE is commonly solved using standard Cartesian n
 umerical methods (e.g.\, finite differences and Lagrange interpolation).
  Complex surfaces with high curvatures and/or thin regions impose restrictio
 ns on the size of the embedding space. Therefore\, for complex surfaces\, 
 fine resolution grids must be used to fit within the embedding space. We d
 evelop a matrix-free solver that can scale to millions of degrees of freed
 om to allow for PDEs to be solved on complex shapes.\n\nOur use of a close
 st point surface representation provides a general framework to handle any
  surface that allows closest point computation\, e.g.\, parametrizations\,
  point clouds\, level-sets\, neural implicits\, etc. The surface can be op
 en or closed\, orientable or not\, of any codimension\, and even mixed-cod
 imension. Therefore\, the approach presented provides a general framework 
 for geometry processing on complex surfaces given by general surface repre
 sentations.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/28/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aleks Donev (Courant Institute\, NYU)
DTSTART:20230519T223000Z
DTEND:20230519T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/32
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 32/">Hydrodynamics and rheology of fluctuating\, semiflexible\, inextensib
 le\, and slender filaments in Stokes flow</a>\nby Aleks Donev (Courant Ins
 titute\, NYU) as part of SFU Mathematics of Computation\, Application and 
 Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nEvery animal
  cell is filled with a cytoskeleton\, a dynamic gel made of inextensible f
 ilaments / bio-polymers\, such as microtubules\, actin filaments\, and int
 ermediate filaments\, all suspended in a viscous fluid. Similar suspension
 s of elastic filaments or polymers are widely used in materials processing
 . Numerical simulation of such gels is challenging because the filament as
 pect ratios are very large.\n\nWe have recently developed new methods for 
 rapidly computing the dynamics of non-Brownian and Brownian inextensible s
 lender filaments in periodically-sheared Stokes flow [1\,2\,4]. We apply o
 ur formulation to a permanently [1] and dynamically cross-linked actin mes
 h [3] in a background oscillatory shear flow. We find that nonlocal hydrodynamic
 s can change the visco-elastic moduli by as much as 40% at certain frequen
 cies\, especially in partially bundled networks [3\,4].\n\nI will focus on
  accounting for bending thermal fluctuations of the filaments by first est
 ablishing a mathematical formulation and numerical methods for simulating 
 the dynamics of stiff but not rigid Brownian fibers in Stokes flow [4]. I 
 will emphasize open questions for the community such as whether there is a
  continuum limit of the Brownian contribution to the stress tensor from th
 e filaments.\n\nThis is joint work with Ondrej Maxian and Brennan Sprinkle
 .\n\nReferences:\n\n1. O. Maxian et al.\, Integral-based spectral method
  for inextensible slender fibers in Stokes flow\, Phys. Rev. Fluids\, 6:
 014102\, 2021.\n2. O. Maxian et al.\, Hydrodynamics of a twisting\, bend
 ing\, inextensible fiber in Stokes flow\, Phys. Rev. Fluids\, 7:074101\,
  2022.\n3. O. Maxian et al.\, Interplay between Brownian motion and cro
 ss-linking controls bundling dynamics in actin networks\, Biophysical J
 .\, 121:1230–1245\, 2022.\n4. O. Maxian et al.\, Bending fluctuations
  in semiflexible\, inextensible\, slender filaments in Stokes flow: towa
 rds a spectral discretization\, arXiv:2301.11123\, to appear in J. Chem.
  Phys.\, 2023.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/32/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Anotida Madzvamuse (UBC)
DTSTART:20230922T223000Z
DTEND:20230922T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/33
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 33/">Image-based modelling using geometric surface PDEs for single and col
 lective cell migration</a>\nby Anotida Madzvamuse (UBC) as part of SFU Mat
 hematics of Computation\, Application and Data ("MOCAD") Seminar\n\nLectur
 e held in K9509.\n\nAbstract\nIn this lecture\, I will focus on formulat
 ing a dynamical geometric surface partial differential equation for mode
 lling static images during the process of single or collective cell migr
 ation. In the absence of detailed experimental molecular and mechanical
  observations\, a question asked by experimentalists is: Given a sequenc
 e of images following single or collective cell migration\, is there an
  optimal dynamic mathematical model that evolves static images at one ti
 me point into static images at a later time point? I will employ both sh
 arp- and diffuse-interface formulations based on phase-fields for geomet
 ric surface partial differential equations to derive a dynamical spatiot
 emporal model for the migration of cells in 2- and 3-D. The model is sol
 ved efficiently using novel high-performance computing techniques based
  on finite differences and multi-grid methods. Such an approach allows u
 s to solve\, in realistic times\, 2- and 3-D computations which are othe
 rwise unfeasible without such innovative numerical analysis computing st
 rategies. To demonstrate the applicability of the computational algorith
 m\, cell migration forces such as polarisation will be exhibited. A by-p
 roduct of the computational algorithm is its ability to quantify automat
 ically cell proliferation rates which are generally obtained through cum
 bersome and error-prone manual counting.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/33/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Miranda Holmes-Cerfon (UBC)
DTSTART:20231027T223000Z
DTEND:20231027T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/34
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 34/">Numerically simulating particles with short-ranged interactions</a>\n
 by Miranda Holmes-Cerfon (UBC) as part of SFU Mathematics of Computation\,
  Application and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstr
 act\nParticles with diameters of nanometres to micrometres form the buildi
 ng blocks of many of the materials around us\, and can be designed in a mu
 ltitude of ways to form new ones. Such particles commonly live in fluids\,
  where they jiggle about randomly because of thermal fluctuations in the f
 luid\, and interact with each other via numerous mechanisms. One challenge
  in simulating such particles is that the range over which they interact a
 ttractively is often much shorter than their diameters\, so the equations 
 describing the particles’ dynamics are stiff\, requiring timesteps much 
 smaller than the timescales of interest. I will introduce methods to accel
 erate these simulations\, which instead solve the limiting equations as th
 e range of the attractive interaction goes to zero. In this limit a system
  of particles is described by a diffusion process on a collection of manif
 olds of different dimensions\, connected by “sticky” boundary conditio
 ns. I will describe our progress in simulating low-dimensional sticky diff
 usion processes\, explain how these algorithms give us insight into sticky
  diffusions’ unusual mathematical properties\, and then discuss some ong
 oing challenges such as extending these methods to high dimensions\, incor
 porating friction and hydrodynamic interactions\, and capturing the anomal
 ous diffusion that is sometimes observed experimentally.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/34/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Blaise Bourdin (McMaster University)
DTSTART:20231103T223000Z
DTEND:20231103T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/35
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 35/">Recent developments in variational and phase-field models of brittle 
 fracture</a>\nby Blaise Bourdin (McMaster University) as part of SFU Mathe
 matics of Computation\, Application and Data ("MOCAD") Seminar\n\nLecture 
 held in K9509.\n\nAbstract\nVariational phase-field models of fracture hav
 e been at the center of a multidisciplinary effort involving a large commu
 nity of mathematicians\, mechanicians\, engineers\, and computational scie
 ntists over the last 25 years or so. I will start with a modern interpreta
 tion of Griffith's classical criterion as a variational principle for a fr
 ee discontinuity energy and will recall some of the milestones in its anal
 ysis. Then\, I will introduce the phase-field approximation per se and des
 cribe its numerical implementation. I illustrate how phase-field models ha
 ve led to major breakthroughs in the predictive simulation of fracture in 
 complex situations. I then will turn my attention to current issues\, incl
 uding crack nucleation in nominally brittle materials\, fracture of hetero
 geneous materials\, and inverse problems.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/35/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Smith (Yale-NUS College)
DTSTART:20230929T223000Z
DTEND:20230929T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/36
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 36/">Fokas Diagonalization</a>\nby David Smith (Yale-NUS College) as part 
 of SFU Mathematics of Computation\, Application and Data ("MOCAD") Seminar
 \n\nLecture held in K9509.\n\nAbstract\nWe describe a new form of diagonal
 ization for linear two point constant coefficient differential operators w
 ith arbitrary linear boundary conditions. Although the diagonalization is 
 in a weaker sense than that usually employed to solve initial boundary val
 ue problems (IBVP)\, we show that it is sufficient to solve IBVP whose spa
 tial parts are described by such operators. We argue that the method descr
 ibed may be viewed as a reimplementation of the Fokas transform method for
  linear evolution equations on the finite interval. The results are extend
 ed to multipoint and interface operators\, including operators defined on 
 networks of finite intervals\, in which the coefficients of the differenti
 al operator may vary between subintervals\, and arbitrary interface and bo
 undary conditions may be imposed\; differential operators with piecewise c
 onstant coefficients are thus included.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/36/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Argyrios Petras (Johann Radon Institute for Computational and Appl
 ied Mathematics)
DTSTART:20231011T223000Z
DTEND:20231011T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/37
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 37/">Numerical methods for the solution of PDEs on static and moving surfa
 ces</a>\nby Argyrios Petras (Johann Radon Institute for Computational and 
 Applied Mathematics) as part of SFU Mathematics of Computation\, Applicati
 on and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nParti
 al differential equations (PDEs) on surfaces arise throughout the natural 
 and applied sciences. The solution of such equations poses a big challenge
  for rather general surfaces\, where no parametrization is possible. In th
 is talk\, we will give an overview of some methods that are based on the c
 losest point concept and use finite difference stencils based on radial ba
 sis functions (RBF-FD).\n
LOCATION:https://researchseminars.org/talk/AppliedMath/37/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stephanie Ross (University of Calgary)
DTSTART:20231020T223000Z
DTEND:20231020T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/38
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 38/">A multimodal approach to understanding skeletal muscle mechanics in h
 ealth and disease</a>\nby Stephanie Ross (University of Calgary) as part o
 f SFU Mathematics of Computation\, Application and Data ("MOCAD") Seminar\
 n\nLecture held in K9509.\n\nAbstract\nSkeletal muscle is the motor that d
 rives human and animal movement\; however\, our understanding of how muscl
 e performs this function is limited because of challenges in directly meas
 uring muscle deformation and force output in living beings. In this talk\,
  I will share my previous work using continuum models of muscle and comple
 mentary experimental measures to determine the mechanisms underlying skele
 tal muscle function. I will then present my current research that builds
  on this fundamental work to probe how changes in the material properties o
 f muscle with diseases such as stroke and cerebral palsy impact muscle fun
 ction and mobility.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/38/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christoph Ortner (UBC)
DTSTART:20231006T223000Z
DTEND:20231006T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/39
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 39/">Geometric Shallow Learning with the Atomic Cluster Expansion (or\, Ef
 ficient Parameterization of Many-body Interaction)</a>\nby Christoph Ortne
 r (UBC) as part of SFU Mathematics of Computation\, Application and Data (
 "MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nAlthough my talk i
 s arguably about machine-learning\, I will use mostly ideas and language f
 rom mathematical modelling and numerical analysis. I will introduce a natu
 ral geometric learning framework\, the atomic cluster expansion (ACE)\, w
 hich focuses on linear and shallow models\, and adds a new dimension to th
 e design space of geometric deep learning. ACE is particularly well-suited
  for parameterising surrogate models of particle systems where it is impor
 tant to incorporate symmetries and geometric priors into models without sa
 crificing systematic improvability.\nMy main focus will be on “learning
 ” interatomic potentials (or\, force fields): in this context\, ACE mode
 ls arise naturally from a few systematic modelling and approximation theor
 etic steps that can be made reasonably rigorous.\nHowever\, the applicabil
 ity is much broader and\, time permitting\, I will also show how the ACE f
 ramework can be adapted to other contexts such as electronic structure (pa
 rameterising Hamiltonians)\, quantum chemistry (wave functions)\, or eleme
 ntary particle physics (e.g.\, jet tagging).\n
LOCATION:https://researchseminars.org/talk/AppliedMath/39/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sam Stechmann (University of Wisconsin-Madison)
DTSTART:20240306T233000Z
DTEND:20240307T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/40
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 40/">Element learning: a systematic approach of accelerating finite elemen
 t-type methods via machine learning</a>\nby Sam Stechmann (University of W
 isconsin-Madison) as part of SFU Mathematics of Computation\, Application 
 and Data ("MOCAD") Seminar\n\nLecture held in SFU K9509.\n\nAbstract\nIn t
 he past decade\, (artificial) neural networks and machine learning tools h
 ave surfaced as game-changing technologies across numerous fields\, resolv
 ing an array of challenging problems. Even for the numerical solution of p
 artial differential equations (PDEs) or other scientific computing problem
 s\, results have shown that machine learning can speed up some computation
 s. However\, many machine learning approaches tend to lose some of the adv
 antageous features of traditional numerical PDE methods\, such as interpre
 tability and applicability to general domains with complex geometry.\n\nIn
  this talk\, we introduce a systematic approach (which we call element lea
 rning) with the goal of accelerating finite element-type methods via machi
 ne learning\, while also retaining the desirable features of finite elemen
 t methods. The derivation of this new approach is closely related to hybri
 dizable discontinuous Galerkin (HDG) methods in the sense that the local s
 olvers of HDG are replaced by machine learning approaches. Numerical tests
  are presented for an example PDE\, the radiative transfer equation\, in a
  variety of scenarios with idealized or realistic cloud fields\, with smoo
 th or sharp gradient in the cloud boundary transition. Comparisons are set
  up with either a fixed number of degrees of freedom or a fixed accuracy l
 evel of $10^{-3}$ in the relative $L^2$ error\, and we observe a significa
 nt speed-up with element learning compared to a classical finite element-t
 ype method.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/40/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Charles Cheung (NVIDIA)
DTSTART:20231023T223000Z
DTEND:20231023T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/41
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 41/">Generative AI and AI for Science and Mathematics</a>\nby Charles Cheu
 ng (NVIDIA) as part of SFU Mathematics of Computation\, Application and Da
 ta ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nIn this talk\,
  I will discuss a few directions and use cases of recent Generative AI de
 velopment for the metaverse and science. In the second part of the talk\,
  I will cover physics-informed neural networks (PINNs) and neural operato
 rs\, which have been used to solve many engineering problems involving d
 ifferential equations. We will walk through the basic concepts of PINNs
  and neural operators and introduce NVIDIA Modulus\, an SDK for training
  them.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/41/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chunyi Gai (UBC)
DTSTART:20231121T233000Z
DTEND:20231122T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/42
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 42/">Pattern formation and Spike Dynamics in the Presence of Noise</a>\nby
  Chunyi Gai (UBC) as part of SFU Mathematics of Computation\, Application 
 and Data ("MOCAD") Seminar\n\nLecture held in ASB10908.\n\nAbstract\nNoise
  plays a crucial role in the formation and evolution of spatial patterns i
 n various reaction-diffusion systems in mathematical biology and ecology. 
 In this talk\, I give two examples where noise significantly influences sp
 atial patterning.  The first example describes how patterned states can pr
 ovide a refuge and prevent extinction under stressed conditions. It also i
 llustrates the importance of not only the absolute level of climate change
 \, but also the speed with which it occurs. The second example studies the
  effect of noise on dynamics of a single spike pattern for the classical G
 ierer--Meinhardt model on a finite interval.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/42/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Liam Madden (UBC)
DTSTART:20240126T233000Z
DTEND:20240127T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/43
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 43/">Memory capacity of two-layer neural networks</a>\nby Liam Madden (UBC
 ) as part of SFU Mathematics of Computation\, Application and Data ("MOCAD
 ") Seminar\n\nLecture held in K9509.\n\nAbstract\nThe memory capacity of a
  statistical model is the largest size of generic data that the model can 
 memorize and has important implications for both training and generalizati
 on. In this talk\, we will prove a tight memory capacity result for two-la
 yer neural networks with general activations. In order to do so\, we will 
 use tools from linear algebra\, combinatorics\, differential topology\, an
 d the theory of real analytic functions of several variables. In particula
 r\, we will show how to get memorization if the model is a local submersio
 n and we will show that the Jacobian has generically full rank. The perspe
 ctive that is developed also opens up a path towards deeper architectures\
 , alternative models\, and training.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/43/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hansol Park (SFU)
DTSTART:20231201T233000Z
DTEND:20231202T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/44
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 44/">Emergent behavior of mathematical models on manifolds</a>\nby Hansol 
 Park (SFU) as part of SFU Mathematics of Computation\, Application and Dat
 a ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nIn this talk\, 
 I introduce several first- and second-order models for self-collective beh
 aviour on general manifolds and discuss their emergent behaviors. For the 
 first-order model\, we consider attractive-repulsive and purely attractive
  interaction potentials\, and investigate the equilibria and the asymptoti
 c behaviour of the solutions. In particular\, we quantify the approach to 
 asymptotic consensus in terms of the convergence rate of the diameter of t
 he solution’s support. For the second-order model (known as the Cucker-S
 male model)\, velocity alignment interactions are considered. To analyze t
 he emergent behaviors of the two models\, the LaSalle invariance principle
  is used. Also\, various geometric tools used to analyze the aggregation m
 odels on manifolds are presented.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/44/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Craig Fraser (University of Toronto)
DTSTART:20231211T230000Z
DTEND:20231212T000000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/45
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 45/">The Clebsch-Mayer Theory of the Second Variation in the Calculus of V
 ariations: A Case Study in the Influence of Dynamical Analysis on Pure Mat
 hematics</a>\nby Craig Fraser (University of Toronto) as part of SFU Mathe
 matics of Computation\, Application and Data ("MOCAD") Seminar\n\nLecture 
 held in SFU AQ5025.\n\nAbstract\nCarl Jacobi worked in the 1830s at the Un
 iversity of Königsberg on what became known as Hamilton-Jacobi theory\, a
 nd also on the theory of the second variation in the calculus of variation
 s. The first was a subject in dynamical analysis\, while the second was a 
 subject in pure mathematics. Insofar as the calculus of variations was con
 cerned\, Jacobi’s contributions were seminal and highly original but pre
 sented in an incomplete and programmatic form. Together his writings stimu
 lated active but independent traditions of research in both subjects. In t
 he late 1850s and 1860s Alfred Clebsch and Adolph Mayer – mathematicians
  associated with the Königsberg school – established a new approach to th
 e investigation of sufficient conditions in the calculus of variations by 
 bringing methods from Hamilton-Jacobi theory to bear on the transformation
  of the second variation. In doing so they established the basis for resea
 rch on the subject that was eventually codified in writings around 1900 of
  Camille Jordan\, Gustav von Escherich and Oskar Bolza.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/45/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Timon S. Gutleb (UBC)
DTSTART:20240216T233000Z
DTEND:20240217T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/46
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 46/">A frame approach for equations involving the fractional Laplacian</a>
 \nby Timon S. Gutleb (UBC) as part of SFU Mathematics of Computation\, App
 lication and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\
 nI will be presenting a frame approach for computing solutions of differen
 tial equations inspired by recent progress in frame theory and sparse spec
 tral methods. The primary case study for our method will be a very general
  family of equations involving the fractional Laplacian.\n\nThis is joint 
 work with I. Papadopoulos\, J.A. Carrillo and S. Olver.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/46/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lisa Kreusser (University of Bath)
DTSTART:20240226T233000Z
DTEND:20240227T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/47
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 47/">Unlocking the Full Potential of Data: From Applied Analysis and Optim
 isation to Applications</a>\nby Lisa Kreusser (University of Bath) as part
  of SFU Mathematics of Computation\, Application and Data ("MOCAD") Semina
 r\n\nLecture held in K9509.\n\nAbstract\nRecent and rapid breakthroughs in
  contemporary biology\, climate science\, and data science have unveiled a
  spectrum of intricate mathematical challenges which can be tackled throug
 h the fusion of applied and numerical analysis\, as well as optimisation. 
 In this talk\, I will begin by delving into a class of interacting particl
 e models with anisotropic interaction forces and their corresponding conti
 nuum limit. These models find their inspiration in the simulation of finge
 rprint patterns\, which play a critical role in databases in forensic scie
 nce and biometric applications. I will showcase our recent findings\, incl
 uding the development of a mean-field optimal control algorithm to tackle 
 an inverse problem arising in parameter identification. Transitioning from
  interaction-focused models to the realm of transport networks\, I will in
 troduce an optimization approach tailored for a unique coupling of differe
 ntial equations that arises in the context of biological network formation
 . Additionally\, I will provide insights into my recent research in data s
 cience\, encompassing topics such as image segmentation\, non-convex optim
 isation algorithms for machine learning\, Wasserstein Generative Adversari
 al Networks (WGANs)\, score-based diffusion models and semi-supervised lea
 rning techniques.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/47/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Silas Polani
DTSTART:20240318T223000Z
DTEND:20240318T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/48
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 48/">Intraguild Predation in homogeneous and heterogeneous landscapes</a>\
 nby Silas Polani as part of SFU Mathematics of Computation\, Application a
 nd Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstract
 \nIntraguild predation (IGP) occurs when two (or more) consumers of the
  same shared resource also exhibit a predator-prey relation among thems
 elves\, and is a prevalent phenomenon in terrestrial\, freshwater and ma
 rine ecological systems. Theoretical work shows that IGP allows for coe
 xistence between two consumers of the same guild as long as the IG prey
  is a more effective consumer than the IG predator\, revealing an impor
 tant mechanism for consumer coexistence in food chains. Here we explore
  biological invasions forming IGP communities by introducing either the
  IG prey or the IG predator to established (single) consumer-resource p
 opulations in homogeneous and heterogeneous landscapes. We use reaction
 -diffusion equations as our modeling framework and explore them through
  numerical simulations and homogenization techniques. In homogeneous la
 ndscapes\, we find that asymptotic spreading speeds are linearly determ
 inate and that the formation of traveling wave solutions and dynamical
  stabilization regimes is possible. In heterogeneous landscapes\, we fi
 nd that coexistence regimes in highly heterogeneous landscapes can occu
 r even when the IG prey is the less effective consumer\, or be hindered
  even when the IG prey remains the dominant competitor\, depending on t
 he habitat preferences of each of the species involved. We close with s
 ome conclusions and avenues for future research.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/48/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Siting Liu (University of California\, Los Angeles)
DTSTART:20240308T233000Z
DTEND:20240309T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/49
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 49/">An inverse problem in mean field game from partial boundary measureme
 nt</a>\nby Siting Liu (University of California\, Los Angeles) as part of 
 SFU Mathematics of Computation\, Application and Data ("MOCAD") Seminar\n\
 nLecture held in K9509 and Hybrid.\n\nAbstract\nMean-field game (MFG) syst
 ems provide a powerful framework for modeling the collective behavior of m
 ulti-agent systems with diverse applications. However\, unknown parameters
  pose challenges. In this work\, we tackle an inverse problem\, recovering
  MFG parameters from limited\, noisy boundary observations. Despite the pr
 oblem's ill-posed nature\, we aim to efficiently retrieve these parameters
  to understand population dynamics. Our focus is on recovering running cos
 t and interaction energy in MFG equations from boundary measurements. We f
 ormalize the problem as a constrained optimization problem with L1 regular
 ization. We then develop a fast and robust operator splitting algorithm to
  solve the optimization using techniques\, including harmonic extensions\,
  a three-operator splitting scheme\, and the primal-dual hybrid gradient m
 ethod. Numerical experiments illustrate the effectiveness and robustness o
 f the algorithm. This is joint work with Yat Tin Chow (UCR)\, Samy Wu Fung
  (Colorado School of Mines)\, Levon Nurbekyan (Emory)\, and Stanley J. Osh
 er (UCLA).\n
LOCATION:https://researchseminars.org/talk/AppliedMath/49/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nicolas Boullé (Imperial College London)
DTSTART:20241108T230000Z
DTEND:20241109T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/50
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 50/">Elliptic PDE learning is data-efficient</a>\nby Nicolas Boullé (Impe
 rial College London) as part of SFU Mathematics of Computation\, Applicati
 on and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbst
 ract\nOperator learning is an emerging field at the intersection of machin
 e learning\, physics\, and mathematics\, that aims to discover properties 
 of unknown physical systems from experimental data. Popular techniques exp
 loit the approximation power of deep learning to learn solution operators\
 , which map source terms to solutions of the underlying PDE. Solution oper
 ators can then produce surrogate data for data-intensive machine learning 
 approaches such as learning reduced order models for design optimization i
 n engineering and PDE recovery. In most deep learning applications\, a lar
 ge amount of training data is needed\, which is often unrealistic in engin
 eering and biology. However\, PDE learning is shockingly data-efficient in
  practice. We provide a theoretical explanation for this behavior by const
 ructing an algorithm that recovers solution operators associated with elli
 ptic PDEs and achieves an exponential convergence rate with respect to the
  size of the training dataset. The proof technique combines prior knowledg
 e of PDE theory and randomized numerical linear algebra techniques and may
  lead to practical benefits such as improving dataset and neural network a
 rchitecture designs.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/50/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gregor Maier (University of Bonn)
DTSTART:20240524T223000Z
DTEND:20240524T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/51
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 51/">On the Approximation of Gaussian Lipschitz Functionals</a>\nby Gregor
  Maier (University of Bonn) as part of SFU Mathematics of Computation\, Ap
 plication and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.\
 n\nAbstract\nOver the past few years\, operator learning – the approxima
 tion of mappings between infinite-dimensional function spaces using ideas 
 from machine learning – has attracted increased research attention. Appr
 oximate operators\, learned from data\, hold promise to serve as efficient
  surrogate models for problems in scientific computing. Multiple model des
 igns have been proposed so far and their efficiency has been demonstrated 
 in various practical applications.\nThe empirical findings are supported b
 y a (slowly) growing body of theoretical approximation guarantees. The lat
 ter focus to a large extent on linear and holomorphic operators. However\,
  far less is known about the approximation of (nonlinear) operators which 
 are merely Lipschitz continuous. \n\nIn this talk\, I will focus on (scala
 r-valued) Lipschitz functionals in a Gaussian setting. I will first consid
 er their polynomial approximation by Hermite polynomials and present lower
  and upper bounds on the best $s$-term error. This will be followed by a d
 iscussion on the approximation of Lipschitz functionals by arbitrary (adap
 tive) sampling algorithms\, which will result in sharp error bounds. Final
 ly\, I will conclude by also addressing the problem of recovering Lipschit
 z functionals from i.i.d. pointwise samples.\n\nThis is joint work with Be
 n Adcock (SFU).\n
LOCATION:https://researchseminars.org/talk/AppliedMath/51/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Williams (Pennsylvania State University)
DTSTART:20240718T223000Z
DTEND:20240718T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/52
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 52/">Finite element exterior calculus in four-dimensional space</a>\nby Da
 vid Williams (Pennsylvania State University) as part of SFU Mathematics of
  Computation\, Application and Data ("MOCAD") Seminar\n\nLecture held in K
 9509 and Hybrid.\n\nAbstract\nThe purpose of this talk is to explain the k
 ey differences between standard finite element methods for 3D applications
 \, and space-time finite element methods for 4D applications. These differ
 ences are elucidated through the lens of finite element exterior calculus 
 (FEEC). Through FEEC\, we can leverage the language of differential geomet
 ry and algebraic topology to construct finite element spaces in any number
  of dimensions. In this work\, we use techniques from FEEC to construct de
 rivative operators in 3D and 4D space. We explain the differences between 
 these operators\, and the associated Sobolev spaces. Thereafter\, we const
 ruct conforming\, high-order\, finite element spaces on the tesseract\, pe
 ntatope\, and tetrahedral prism in 4D. These shapes are fundamental geomet
 ric quantities in 4D\, as they correspond to the four-dimensional analogs 
 of the cube\, tetrahedron\, and triangular prism\, respectively.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/52/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Xuefeng Liu (Tokyo Woman's Christian University)
DTSTART:20240906T220000Z
DTEND:20240906T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/53
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 53/">Rigorous evaluation of the Hadamard derivative for shape optimization
  problems</a>\nby Xuefeng Liu (Tokyo Woman's Christian University) as part
  of SFU Mathematics of Computation\, Application and Data ("MOCAD") Semina
 r\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nThis talk introduces a
  newly developed computational method for rigorously evaluating the Hadama
 rd derivative of Laplacian eigenvalues\, which plays an important role in 
 studying shape optimization problems.\n\nTo evaluate the Hadamard derivati
 ve\, this method employs state-of-the-art algorithms for eigenvalues and e
 igenfunctions via the finite element method (Liu'2013\,2015\; Liu-Vejchods
 ky'2022)\, effectively handling cases of repeated or closely spaced eigenv
 alues.\n\nWe also present a computer-assisted proof for the optimization a
 nd simplicity of Laplacian eigenvalues over triangular domains (Endo-Liu'2
 023\,2024)\, demonstrating the impact of these computational advancements 
 in spectral geometry.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/53/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Isaac Harris (Purdue University)
DTSTART:20240913T220000Z
DTEND:20240913T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/54
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 54/">Transmission Eigenvalue Problems for a Scatterer with a Conductive Bo
 undary</a>\nby Isaac Harris (Purdue University) as part of SFU Mathematics
  of Computation\, Application and Data ("MOCAD") Seminar\n\nLecture held i
 n K9509 and Hybrid.\n\nAbstract\nIn this talk\, we will investigate the
  acoustic transmission eigenvalue problem associated with an inhomogene
 ous medium with a conductive boundary. These are a new class of eigenva
 lue problems that are not elliptic\, not self-adjoint\, and non-linear
 \, which gives the possibility of complex eigenvalues. The talk will co
 nsider the cases of isotropic and anisotropic scatterers. We will discu
 ss the existence of the eigenvalues as well as their dependence on the
  material parameters. Because this is a non-standard eigenvalue problem
 \, a discussion of the numerical calculations will also be highlighted.
  Lastly\, we will discuss recovering the scatterer using a monotonicity
  method that is independent of the transmission eigenvalues.\n\nThis is
  joint work with O. Bondarenko\, V. Hughes\, A. Kleefeld\, H. Lee\, and
  J. Sun.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/54/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Denis Grebenkov (CNRS - Ecole Polytechnique)
DTSTART:20240927T220000Z
DTEND:20240927T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/55
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 55/">Probabilistic insights on the Steklov spectral problem: theory\, nume
 rics and applications</a>\nby Denis Grebenkov (CNRS - Ecole Polytechnique)
  as part of SFU Mathematics of Computation\, Application and Data ("MOCAD"
 ) Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nIn this overvi
 ew talk\, I will present the encounter-based approach to diffusive process
 es in Euclidean domains and highlight its fundamental relation to the Stek
 lov spectral problem. Indeed\, the Steklov eigenfunctions prove parti
 cularly useful for representing heat kernels with Robin boundary condition
  and disentangling diffusive dynamics from reaction events on the boundary
 . I will also discuss applications of this approach in physical chemistry 
 (to describe diffusion-controlled reactions) and in statistical physics (t
 o determine the statistics of encounters and various first-passage times).
  Some open questions related to spectral\, probabilistic and numerical asp
 ects of this spectral problem will be outlined.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/55/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wuyang Chen (Simon Fraser University)
DTSTART:20241101T220000Z
DTEND:20241101T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/56
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 56/">Towards Data-Efficient OOD Generalization of Scientific Machine Learn
 ing Models</a>\nby Wuyang Chen (Simon Fraser University) as part of SFU Ma
 thematics of Computation\, Application and Data ("MOCAD") Seminar\n\nLectu
 re held in K9509 and Hybrid.\n\nAbstract\nIn recent years\, there has been
  growing promise in coupling machine learning methods with domain-specific
  physical insights to solve scientific problems based on partial different
 ial equations (PDEs). However\, there are two critical bottlenecks that mu
 st be addressed before scientific machine learning (SciML) can become prac
 tically useful. First\, SciML requires extensive pretraining data to cover
  diverse physical systems and real-world scenarios. Second\, SciML models 
 often perform poorly when confronted with unseen data distributions that d
 eviate from the training source\, even when dealing with samples from the 
 same physical systems that have only slight differences in physical parame
 ters. In this line of work\, we aim to address these challenges using data
 -centric approaches. To enhance data efficiency\, we have developed the fi
 rst unsupervised learning method for neural operators. Our approach involv
 es mining unlabeled PDE data without relying on heavy numerical simulation
 s. We demonstrate that unsupervised pretraining can consistently reduce th
 e number of simulated samples required during fine-tuning across a wide ra
 nge of PDEs and real-world problems. Furthermore\, to evaluate and improve
  the out-of-distribution (OOD) generalization of neural operators\, we hav
 e carefully designed a benchmark that includes diverse physical parameters
  to emulate real-world scenarios. By evaluating popular architectures acro
 ss a broad spectrum of PDEs\, we conclude that neural operators achieve mo
 re robust OOD generalization when pretrained on physical dynamics with hig
 h-frequency patterns rather than smooth ones. This suggests that data-driv
 en SciML methods will benefit more from learning from challenging samples.
 \n
LOCATION:https://researchseminars.org/talk/AppliedMath/56/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christina Runkel (University of Cambridge)
DTSTART:20240920T220000Z
DTEND:20240920T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/57
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 57/">Learning posterior distributions in underdetermined inverse problems<
 /a>\nby Christina Runkel (University of Cambridge) as part of SFU Mathemat
 ics of Computation\, Application and Data ("MOCAD") Seminar\n\nLecture hel
 d in K9509 and Hybrid.\n\nAbstract\nIn recent years\, classical knowledge-
 driven approaches for inverse problems have been complemented by data-driv
 en methods exploiting the power of machine and especially deep learning. P
 urely data-driven methods\, however\, come with the drawback of disregardi
 ng prior knowledge of the problem\, even though it has proven beneficia
 l to incorporate this knowledge into the problem-solving process.\n\nIn th
 is talk\, we introduce an unpaired learning approach for learning posterio
 r distributions of underdetermined inverse problems. It combines advantage
 s of deep generative modeling with established ideas of knowledge-driven a
 pproaches by incorporating prior information about the inverse problem. We
  develop a new neural network architecture ‘UnDimFlow’ (short for Uneq
 ual Dimensionality Flow) consisting of two normalizing flows\, one from th
 e data to the latent\, and one from the latent to the solution space. Addi
 tionally\, we incorporate the forward operator to develop an unpaired lear
 ning method for the UnDimFlow architecture and propose a tailored point es
 timator to recover an optimal solution during inference. We evaluate our m
 ethod on the two underdetermined inverse problems of image inpainting and 
 super-resolution.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/57/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Antoine Cerfon (Type One Energy Group)
DTSTART:20241018T220000Z
DTEND:20241018T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/58
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 58/">Open math problems for optimized fusion reactors</a>\nby Antoine Cerf
 on (Type One Energy Group) as part of SFU Mathematics of Computation\, App
 lication and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n
 \nAbstract\nStellarators are promising magnetic fusion devices for electri
 city generation\, because the dynamics of the hot fusion fuel - called a p
 lasma - is largely determined by external control\, as opposed to dynamica
 l self-organization\, as is the case for other magnetic fusion concepts. C
 omputer design and simulations reliably predict experimental performance\,
  which opens a lower-risk and more cost-efficient path to fusion power. In
  this talk\, I will present the mathematical challenges one faces when des
 igning stellarators with optimized performance. I will show how recent pro
 gress in our mathematical understanding of stellarators and in numerical m
 ethods for reactor optimization have led to the discovery of reactor desig
 ns with outstanding physical properties. I will also highlight open proble
 ms in pure mathematics\, scientific computing\, numerical optimization\, a
 nd reduced-order modeling\, whose solutions could further improve reactor 
 performance.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/58/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chiara Saffirio (UBC)
DTSTART:20241025T220000Z
DTEND:20241025T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/60
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 60/">Uniqueness criteria for the Vlasov-Poisson system and applications to
  semiclassical problems.</a>\nby Chiara Saffirio (UBC) as part of SFU Math
 ematics of Computation\, Application and Data ("MOCAD") Seminar\n\nLecture
  held in K9509 and Hybrid.\n\nAbstract\nThe Vlasov-Poisson system is a non
 -linear PDE describing the mean-field time-evolution of particles forming 
 a plasma or a galaxy.\nIn this talk I will present uniqueness criteria for
  the Vlasov-Poisson equation in the classical and semi-relativistic settin
 g\, emerging as corollaries of stability estimates in strong (L^p) topolog
 ies or in weak topologies (induced by Wasserstein distances)\, and show ho
 w they serve as a guideline to solve semiclassical problems. Different top
 ologies will allow us to treat different classes of quantum states.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/60/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Anjali Nair (University of Chicago)
DTSTART:20241115T230000Z
DTEND:20241116T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/61
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 61/">From Schrödinger to diffusion: speckle formation of light in random
  media and the Gaussian conjecture</a>\nby Anjali Nair (University of Chica
 go) as part of SFU Mathematics of Computation\, Application and Data ("MOC
 AD") Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nA well-know
 n conjecture in physical literature states that high frequency waves propa
 gating over long distances through turbulence eventually become complex Ga
 ussian distributed. The intensity of such wave fields then follows an expo
 nential law\, consistent with speckle formation observed in physical exper
 iments. Though fairly well-accepted and intuitive\, this conjecture is not
  entirely supported by any detailed mathematical derivation. In this talk\
 , I will discuss some recent results demonstrating the Gaussian conjecture
  in a weak-coupling regime of the paraxial approximation.\n\nThe paraxi
 al approximation is a high frequency approximation of the Helmholtz equati
 on\, where backscattering is ignored. This takes the form of a Schrödinge
 r equation with a random potential and is often used to model laser propag
 ation through turbulence. The proof relies on the asymptotic closeness of 
 statistical moments of the wavefield under the paraxial approximation\, it
 s white noise limit and the complex Gaussian distribution itself. I will d
 escribe two scaling regimes\, one is a kinetic scaling where the second mo
 ment is given by a transport equation and a second diffusive scaling\, whe
 re the second moment follows an anomalous diffusion. In both cases\, the l
 imiting complex Gaussian distribution is fully characterized by its first 
 and second moments. An additional stochastic continuity/tightness criterio
 n allows us to show the convergence of these distributions over spaces of
  Hölder-continuous functions.\n\nThis is joint work with Guillaume Bal.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/61/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael W. Mahoney (ICSI\, LBNL\, and Department of Statistics\, U
 C Berkeley)
DTSTART:20241211T223000Z
DTEND:20241211T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/62
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 62/">Foundational Methods for Foundation Models for Scientific Machine Lea
 rning</a>\nby Michael W. Mahoney (ICSI\, LBNL\, and Department of Statisti
 cs\, UC Berkeley) as part of SFU Mathematics of Computation\, Application 
 and Data ("MOCAD") Seminar\n\nLecture held in Big Data Hub ASB10900 and Hy
 brid.\n\nAbstract\nThe remarkable successes of ChatGPT in natural language
  processing (NLP) and related developments in computer vision (CV) motivat
 e the question of what foundation models would look like and what new adva
 nces they would enable\, when built on the rich\, diverse\, multimodal dat
 a that are available from large-scale experimental and simulational data i
 n scientific computing (SC)\, broadly defined.  Such models could provide 
 a robust and principled foundation for scientific machine learning (SciML)
 \, going well beyond simply using ML tools developed for internet and soci
 al media applications to help solve future scientific problems.  I will de
 scribe recent work demonstrating the potential of the "pre-train and fine-
 tune" paradigm\, widely-used in CV and NLP\, for SciML problems\, demonstr
 ating a clear path towards building SciML foundation models\; as well as r
 ecent work highlighting multiple "failure modes" that arise when trying to
  interface data-driven ML methodologies with domain-driven SC methodologie
 s\, demonstrating clear obstacles to traversing that path successfully.  I
  will also describe initial work on developing novel methods to address se
 veral of these challenges\, as well as their implementations at scale\, a 
 general solution to which will be needed to build robust and reliable SciM
 L models consisting of millions or billions or trillions of parameters.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/62/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marta Ghirardelli (NTNU)
DTSTART:20250331T220000Z
DTEND:20250331T230000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/63
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 63/">Conditional Stability of the Euler Method on Riemannian Manifolds</a>
 \nby Marta Ghirardelli (NTNU) as part of SFU Mathematics of Computation\, 
 Application and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid
 .\n\nAbstract\nWe consider neural networks (NN) as discretizations of cont
 inuous dynamical systems. There are two relevant systems: the NN architect
 ure on one side and the gradient flow for optimizing the parameters on the
  other. In both cases\, stability properties of the discretization methods
  can be relevant e.g. for adversarial robustness. Moreover\, to prevent th
 e problem of exploding or vanishing gradients\, it is common to consider N
 Ns whose feature space and/or parameter space is a Riemannian manifold. We
  investigate the stability of the explicit Euler method defined on Riemann
 ian manifolds\, namely the Geodesic Explicit Euler (GEE). We provide a gen
 eral sufficient condition which ensures stability in any Riemannian manifo
 ld. Whenever the manifold has constant sectional curvature\, such conditio
 n can be turned into a rule for choosing the stepsize.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/63/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Brendan Pass (University of Alberta)
DTSTART:20250303T230000Z
DTEND:20250304T000000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/64
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 64/">An ODE characterization of regularized optimal transport and variants
  with linear constraints</a>\nby Brendan Pass (University of Alberta) as p
 art of SFU Mathematics of Computation\, Application and Data ("MOCAD") Sem
 inar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nI will discuss vari
 ous joint works with Luca Nenna and PhD student Joshua Hiew. We show that 
 entropically regularized optimal transport with discrete marginals and gen
 eral cost functions can be characterized by a well-posed ordinary differen
 tial equation.   The techniques adapt easily to a wide range of variants o
 f optimal transport\, with additional linear constraints\, including multi
 -marginal optimal transport and martingale optimal transport.  For all of 
 these problems\, the ODE can be solved by standard schemes\, yielding a ne
 w computational method.  This method has the advantage of simultaneously y
 ielding the solution for all values of the regularization parameter.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/64/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Robert John Baraldi (Sandia National Laboratories)
DTSTART:20250310T220000Z
DTEND:20250310T230000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/65
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 65/">A Nonsmooth Trust-Region Framework for Applications in Data Science a
 nd PDE Constrained Optimization</a>\nby Robert John Baraldi (Sandia Nation
 al Laboratories) as part of SFU Mathematics of Computation\, Application a
 nd Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstract
 \nWe introduce an inexact trust-region method for efficiently solving a cl
 ass of problems in which the objective is the sum of a smooth\, nonconvex 
 function and a nonsmooth\, convex function. Such objectives are pervasive in
  the literature\, with examples being machine learning\, basis pursuit\, i
 nverse problems\, and topology optimization. The inclusion of nonsmooth re
 gularizers and constraints is critical\, as they often preserve physical p
 roperties or promote sparsity in the control. \nEnforcing these properties
 in an efficient manner is critical when met with the computationally inte
 nse nature of solving PDEs or machine learning applications. We develop a
  nove
 l trust-region method to minimize the sum of a smooth nonconvex function a
 nd a nonsmooth convex function. Our method is unique in that it permits an
 d systematically controls the use of inexact objective function and deriva
 tive evaluations. When using a quadratic Taylor model for the trust-region
  subproblem\, our algorithm is an inexact\, matrix-free proximal Newton-ty
 pe method that permits indefinite Hessians. Moreover\, we provide extensio
 ns of this method to adaptive mesh refinement\, stochastic optimization as
  well as multilevel procedures. We prove global convergence of our method i
 n Hilbert space and demonstrate its efficacy on examples from data science
  and PDE-constrained optimization.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/65/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Astrid Herremans (KU Leuven)
DTSTART:20250317T220000Z
DTEND:20250317T230000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/66
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 66/">Function Approximation with Numerical Redundancy</a>\nby Astrid Herre
 mans (KU Leuven) as part of SFU Mathematics of Computation\, Application a
 nd Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstract
 \nIn function approximation\, it is standard to assume the availability of
  an orthonormal basis for computations\, ensuring that numerical errors ar
 e negligible. However\, this assumption is often unmet in practice. For in
 stance\, multivariate approximation schemes might use basis functions defi
 ned on a tensor-product domain\, while the function to be approximated onl
 y exists on an irregular subdomain. When restricted to such a subdomain\, 
 the basis loses its orthogonality. This work discards the orthogonality as
 sumption\, enabling more flexible design of computational methods through 
 the use of non-orthogonal spanning sets. To precisely identify when numeric
 al phenomena become significant\, we introduce the concept of numerical re
 dundancy. A set of functions is numerically redundant if it spans a lower-
 dimensional space when analysed numerically rather than analytically. This
  talk explores the key aspects of computing with such numerically redundan
 t spanning sets\, including convergence behaviour\, solver requirements\, 
 and data efficiency.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/66/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Elena Celledoni (NTNU)
DTSTART:20250407T220000Z
DTEND:20250407T230000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/67
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 67/">Shape analysis\, structure preservation and deep learning</a>\nby Ele
 na Celledoni (NTNU) as part of SFU Mathematics of Computation\, Applicatio
 n and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstr
 act\nShape analysis is a framework for treating complex data and obtainin
 g metrics on spaces of data. Examples are spaces of unparametrized curves
 \, ti
 me-signals\, surfaces and images. In this talk we discuss structure preser
 vation and deep learning for classifying\, analysing and manipulating shap
 es. \nA computationally demanding task for estimating distances between sh
 apes\, e.g. in object recognition\, is the computation of optimal reparame
 trizations. This is an optimisation problem on the infinite dimensional gr
 oup of orientation preserving diffeomorphisms.\nWe approximate diffeomorph
 isms with neural networks and adopt the optimal control and dynamical sys
 tems point of view on deep learning. We will discuss useful geometric pro
 perties in this context\, e.g. reparametrization invariance of the distan
 ce function and the inherent geometric structure of the data.\nAnother in
 teresting set of related problems arises when learning dynamical systems
  from (human motion) data.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/67/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Brynjulf Owren (NTNU)
DTSTART:20250416T190000Z
DTEND:20250416T200000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/68
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 68/">A dynamical systems approach for designing stable neural networks on 
 Euclidean spaces and Riemannian manifolds.</a>\nby Brynjulf Owren (NTNU) a
 s part of SFU Mathematics of Computation\, Application and Data ("MOCAD") 
 Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nRecently\, Sherr
 y et al. (2024) reconsidered the pioneering work of Dahlquist and Jeltsch 
 (1979) on circle-contractivity for the study of neural networks. This theo
 ry can be used to analyse and improve the robustness of architectures that
  are devised by a dynamical systems approach.\nThe main idea is to start w
 ith a continuous dynamical system which satisfies a certain monotonicity c
 ondition. Then we need to discretize the system in a way that preserves th
 e non-expansive behavior of the associated flow. The theory is old\, but n
 ot necessarily widely known because Dahlquist and Jeltsch only published t
 he results in the form of a preprint. The application to neural networks i
 s new as far as we know\, and we shall present some results and examples f
 rom Sherry et al (2024).\nThe importance of neural networks set on Riemann
 ian manifolds seems to be increasing and there is a need to develop the th
 eory of non-expansive numerical methods also in such a setting.\nWe presen
 t some ideas from Arnold et al. (2024) where a few simple numerical method
 s for Riemannian manifolds are studied. We consider whether these methods 
 can be non-expansive when applied to non-expansive vector fields. For the 
 geodesic implicit Euler method\, which also features in the proximal gradie
 nt method for optimisation\, we find that its behaviour is strongly depend
 ent on the sectional curvature of the manifold. As opposed to the Euclidea
 n case\, we now also have to be careful about whether the nonlinear equati
 ons to be solved in each time step have a unique solution or not.\n\nArnold
 \, Celledoni\, Çokaj\, Owren\, Tumiotto: B-stability of numerical integra
 tors on Riemannian manifolds. Journal of Computational Dynamics\, 2024\, 1
 1(1): 92-107. doi: 10.3934/jcd.2024002\n\nDahlquist and Jeltsch: Generaliz
 ed disks of contractivity for explicit and implicit Runge-Kutta methods.\n
 Dept. of Numerical Analysis and Computer Science\, The Royal Institute of 
 Technology\, Stockholm\, Report TRITA-NA-7906\, 1979.\n\nSherry\, Celledo
 ni\, Ehrhardt\, Murari\, Owren\, Schönlieb: Designing Stable Neural Netwo
 rks using Convex Analysis and ODEs\, Physica D: Nonlinear Phenomena\, (463
 ) 2024\, Paper No. 134159\, 13 pp.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/68/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jay Gopalakrishnan (Portland State University)
DTSTART:20250411T220000Z
DTEND:20250411T230000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/69
DESCRIPTION:by Jay Gopalakrishnan (Portland State University) as part of S
 FU Mathematics of Computation\, Application and Data ("MOCAD") Seminar\n\n
 Lecture held in SFU West Mall 2830.\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/AppliedMath/69/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Palmer (Harvard University)
DTSTART:20250522T220000Z
DTEND:20250522T230000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/70
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 70/">From Geometry Processing to Topological Defects and Beyond</a>\nby Da
 vid Palmer (Harvard University) as part of SFU Mathematics of Computation\
 , Application and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybr
 id.\n\nAbstract\nPractical problems from computer graphics\, computer visi
 on\, and computational engineering reveal surprising connections to the ph
 ysics of crystals\, knot theory\, minimal surfaces\, and algebraic geometr
 y. Borrowing tools from math and physics helps us devise more robust and e
 fficient algorithms\, and conversely\, computational exploration with thes
 e tools can provide mathematical insight and elucidate new theoretical que
 stions.\n\nIn optimization over surfaces\, local methods can get stuck whe
 n the incorrect topology is chosen at initialization. Current relaxation\,
  an idea borrowed from the analysis of minimal surfaces\, provides an alte
 rnative convex language for surface optimization that avoids these barrier
 s. This idea inspires our new representation\, DeepCurrents\, for learning
  families of surfaces with boundary.\n\nNext we turn to computational mesh
 ing\, an essential geometric prerequisite to many techniques for simulatin
 g continuous physical systems. Surprisingly\, meshing itself involves stru
 ctures analogous to topological defects found in physics\, and these defec
 ts are at the heart of what makes meshing problems challenging. Through ex
 ploring the geometry of defects\, we devise two different approaches to su
 rmounting these barriers\, based on current relaxation and semidefinite re
 laxation\, respectively.\n\nThese examples serve as a microcosm of how thi
 nking carefully about the geometry and topology of optimization landscapes
  can unlock more robust and reliable algorithms\, suggesting a path forwar
 d in interdisciplinary applied geometry.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/70/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Kthim Imeri
DTSTART:20250530T220000Z
DTEND:20250530T230000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/71
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 71/">Spectral Solutions to Robin Problems using Steklov Eigenfunctions and
  their Relations with the Smoothness of Domains</a>\nby Kthim Imeri as par
 t of SFU Mathematics of Computation\, Application and Data ("MOCAD") Semin
 ar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nThe Laplace operator 
 with Dirichlet or Robin boundary conditions can be solved via a spectral s
 eries of Steklov eigenfunctions\, which converges exponentially fast for s
 mooth domains and data. The rate at which the Steklov eigenfunctions thems
 elves can be approximated numerically depends critically on the boundary
 ’s regularity.\n\nKey idea: Reformulate the boundary-value problem so th
 at the solution is recovered from a rapidly converging series of Steklov m
 odes.\n\nTheoretical Insights: On smoothly shaped domains (with smooth bou
 ndary data)\, the series converges exponentially\, requiring very few term
 s for high accuracy. Even for irregular domains or rough data\, the method
  retains algebraic (polynomial-rate) convergence.\n\nNumerical Implementat
 ion: We present three complementary schemes for computing Steklov eigenfun
 ctions and assembling the spectral expansion.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/71/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Brendan Keith (Brown University)
DTSTART:20251009T203000Z
DTEND:20251009T213000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/72
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 72/">Proximal Galerkin: A Unified Framework for Variational Problems with 
 Inequality Constraints</a>\nby Brendan Keith (Brown University) as part of
  SFU Mathematics of Computation\, Application and Data ("MOCAD") Seminar\n
 \nLecture held in K9509 and Hybrid.\n\nAbstract\nThis talk presents the Pr
 oximal Galerkin (PG) method\, a high-order numerical method for solving va
 riational problems with inequality constraints. PG combines two foundation
 al ideas from applied mathematics: Galerkin discretizations of partial dif
 ferential equations and Bregman proximal point algorithms for nonsmooth or
  constrained optimization. Each iteration of the method solves a regulariz
 ed subproblem formulated as a nonlinear saddle-point system. Conceptually\
 , PG is a discretized gradient flow within a finite-dimensional function s
 pace\, such as a finite element subspace\, yielding robust and convergent 
 solution approximations. The unified framework systematically handles a br
 oad class of variational inequalities\, enabling high-order\, constraint-p
 reserving solutions without the need for specialized basis functions. This
  talk will outline the theoretical foundations of PG\, highlight its conne
 ctions to convex analysis\, and showcase recent applications in contact me
 chanics\, fracture\, and multi-phase flows\, among others.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/72/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alexandre Girouard (Université Laval)
DTSTART:20250912T223000Z
DTEND:20250912T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/73
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 73/">The exterior Steklov problem for Euclidean domains</a>\nby Alexandre 
 Girouard (Université Laval) as part of SFU Mathematics of Computation\, A
 pplication and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.
 \n\nAbstract\nWe investigate the Steklov eigenvalue problem in the exterio
 r of a bounded Euclidean domain. In particular\, we prove the equivalence 
 of several formulations of this problem previously proposed in the litera
 ture. We derive geometric eigenvalue inequalities and examine other prope
 rties of the exterior Steklov eigenvalues and eigenfunctions. Our results
  reveal that while there are many similarities between the exterior and t
 he interior Steklov problems\, certain spectral phenomena differ signific
 antly. We also emphasise the distinctions between the properties of the e
 xterior Steklov problem in two dimensions and in higher dimensions.\n\nJo
 int work with Lukas Bundrock\, Denis Grebenkov\, Michael Levitin and Iosi
 f Polterovich.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/73/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nathan Kutz (University of Washington)
DTSTART:20251114T223000Z
DTEND:20251114T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/74
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 74/">Modern Sensing and Physics Discovery with Machine Learning</a>\nby Na
 than Kutz (University of Washington) as part of SFU Mathematics of Computa
 tion\, Application and Data ("MOCAD") Seminar\n\nLecture held in AQ3149.\n
 \nAbstract\nSensing is a universal task in science and engineering. Downst
 ream tasks from sensing include learning dynamical models\, inferring full
  state estimates of a system (system identification)\, control decisions\,
  and forecasting. These tasks are exceptionally challenging to achieve wit
 h limited sensors\, noisy measurements\, and corrupt or missing data. Exis
 ting techniques typically use current (static) sensor measurements to perf
 orm such tasks and require principled sensor placement or an abundance of 
 randomly placed sensors. In contrast\, we propose a SHallow REcurrent Deco
 der (SHRED) neural network structure which incorporates (i) a recurrent ne
 ural network (LSTM) to learn a latent representation of the temporal dynam
 ics of the sensors\, and (ii) a shallow decoder that learns a mapping betw
 een this latent representation and the high-dimensional state space. By ex
 plicitly accounting for the time-history\, or trajectory\, of the sensor m
 easurements\, SHRED enables accurate reconstructions with far fewer sensor
 s\, outperforms existing techniques when more measurements are available\,
  and is agnostic towards sensor placement. In addition\, a compressed repr
 esentation of the high-dimensional state is directly obtained from sensor 
 measurements\, which provides an on-the-fly compression for modeling physi
 cal and engineering systems. Forecasting is also achieved from the sensor 
 time-series data alone\, producing an efficient paradigm for predicting te
 mporal evolution with an exceptionally limited number of sensors. In the e
 xample cases explored\, including turbulent flows\, complex spatio-tempora
 l dynamics can be characterized with exceedingly limited sensors that can 
 be randomly placed with minimal loss of performance.\n\nThis event is a jo
 int Physics Colloquium and MOCAD seminar.  Please note 2:30 p.m. start tim
 e.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/74/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sheehan Olver (Imperial College London)
DTSTART:20250922T223000Z
DTEND:20250922T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/75
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 75/">Numerical Analysis Meets Representation Theory</a>\nby Sheehan Olver 
 (Imperial College London) as part of SFU Mathematics of Computation\, Appl
 ication and Data ("MOCAD") Seminar\n\nLecture held in K9509 and Hybrid.\n\
 nAbstract\nIn this talk we see how representation theory can be used in nu
 merical methods for partial differential equations (PDEs) and how numerics
  can give more efficient methods for computational problems in representat
 ion theory. In particular\, we will see that representation theory tells u
 s the ways symmetry can present itself\, and building that information int
 o discretisations of PDEs leads to trivial parallelisation. We will also s
 ee that numerical linear algebra can be used to construct a polynomial tim
 e algorithm for decomposing representations of the symmetric group. Finall
 y\, we discuss potential applications of the ideas to the Schrödinger eq
 uation with multiple particles.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/75/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Charles Cheung (NVIDIA)
DTSTART:20250908T223000Z
DTEND:20250908T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/77
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 77/">From PhysicsML to Physical AI</a>\nby Charles Cheung (NVIDIA) as part
  of SFU Mathematics of Computation\, Application and Data ("MOCAD") Semina
 r\n\nLecture held in K9509.\n\nAbstract\nMachine learning is transforming 
 the way we approach the laws of nature. PhysicsML — the fusion of machin
 e learning with physics-based modeling — is rapidly advancing fields fro
 m computational biology and climate forecasting to product design and high
 -fidelity CFD simulations. These breakthroughs are pushing the boundaries 
 of what we can model\, predict\, and design. But where do we go from here?
 \n\nIn this talk\, we explore the emerging frontier of Physical AI: intell
 igent systems that understand and interact with the physical world through
  the combined power of machine learning\, physics-based models\, and physi
 cally accurate simulations. We will share NVIDIA’s vision for enabling t
 his future—where PhysicsML serves as the engine\, simulation platforms p
 rovide realistic virtual worlds\, and the results drive the next generatio
 n of robotics\, autonomous vehicles\, and beyond.\nThe era of machines tha
 t not only think but also reason about the physical world has begun. Let
 ’s see what comes next.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/77/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jethro Warnett (Oxford)
DTSTART:20251017T223000Z
DTEND:20251017T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/78
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 78/">CANCELLED - Well-posedness and mean-field limit estimate of a consens
 us-based algorithm for multiplayer games</a>\nby Jethro Warnett (Oxford) a
 s part of SFU Mathematics of Computation\, Application and Data ("MOCAD") 
 Seminar\n\nLecture held in K9509 and Hybrid.\n\nAbstract\nRecently\, a der
 ivative-free consensus-based particle method was introduced that finds the
  Nash equilibrium of non-convex multiplayer games\, for which global expo
 nential convergence was proved in the sense of mean-field law. We provide
  a quantitative estimate of the mean-field limit with respect to the numb
 er o
 f particles\, as well as establishing the well-posedness of both the finit
 e particle model and the corresponding mean-field dynamics.\n\nDue to a me
 dical emergency\, today's 3:30 p.m. MOCAD Seminar is cancelled.  We are att
 empting to reschedule (and it seems another opportunity might be the same 
 time\, 3:30 p.m.\, Monday afternoon).\n
LOCATION:https://researchseminars.org/talk/AppliedMath/78/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Cliff Stoll (Acme Klein Bottles)
DTSTART:20251017T163000Z
DTEND:20251017T173000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/79
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 79/">Glass in Math\; Math in Glass</a>\nby Cliff Stoll (Acme Klein Bottles
 ) as part of SFU Mathematics of Computation\, Application and Data ("MOCAD
 ") Seminar\n\nLecture held in ASB10900 and Hybrid.\n\nAbstract\nGlass Klei
 n bottles? Sure! How about knots and knot complements? A Boy's surface? P
 lenty of topological manifolds work well in glass.\n\nWith good fortune
 \, SFU's glass-blower\, Lucas Clarke\, will demonstrate his art in making 
 mathematical manifolds in glass.\n\nC'mon over for hot math and hot glass!
 \n\nThis expository MOCAD seminar is a special wide-audience seminar held 
 in ASB10900 (the Big Data Hub lecture theatre).\n
LOCATION:https://researchseminars.org/talk/AppliedMath/79/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrew Warren (University of British Columbia)
DTSTART:20251121T233000Z
DTEND:20251122T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/80
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 80/">Unsupervised learning of 1d branching structures</a>\nby Andrew Warre
 n (University of British Columbia) as part of SFU Mathematics of Computati
 on\, Application and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nA
 bstract\nSuppose we have unlabeled data where we believe there is an unkno
 wn\, latent branching (or tree-like) structure. Can we infer that structur
 e? This type of unsupervised learning problem arises in a wide range of bi
 ological applications\, including in evolutionary and developmental settin
 gs. \n\nIn this talk\, I will present a variational approach to this probl
 em\, whereby the latent branching structure can be estimated by way of a d
 iscretization of the "average-distance problem" of Buttazzo\, Oudet\, and 
 Stepanov. The resulting estimator is shown to be consistent in the zero-no
 ise limit\, and can be cheaply approximated numerically by a Lloyd- or EM-
 type algorithm. This work is joint with Anton Afanassiev\, Forest Kobayash
 i\, and Geoff Schiebinger.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/80/
END:VEVENT
BEGIN:VEVENT
SUMMARY:James Rowbottom (Cambridge)
DTSTART:20251024T223000Z
DTEND:20251024T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/81
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 81/">Physics inspired GNNs and some applications in scientific computing</
 a>\nby James Rowbottom (Cambridge) as part of SFU Mathematics of Computati
 on\, Application and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nA
 bstract\nIn this talk I will present a series of works derived from the fr
 amework of physics inspired graph neural networks (GNN). The central premi
 se is that a GNN can be seen as the discretisation of a learnable dynamic
 al system over a graph. This allows us to leverage the standard tools of
  numerical analysis to design and optimise in this model space. Firstly\,
  I will dem
 onstrate how this provides desirable architectural properties which lead t
 o SOTA performance in common GNN node classification tasks. In the latter 
 part of the talk\, I will show how the same architectures emerge as natura
 l candidates in a range of applications found in scientific computing incl
 uding adaptive mesh refinement for finite element methods and mesh based g
 raph inverse problems.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/81/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Deanna Needell (UCLA)
DTSTART:20251128T233000Z
DTEND:20251129T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/82
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 82/">Fairness\, theory\, and sampling paradigms in machine learning</a>\nb
 y Deanna Needell (UCLA) as part of SFU Mathematics of Computation\, Applic
 ation and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nIn
  this talk\, we will discuss several areas of recent work centered around 
 the themes of fairness and foundations in machine learning\, and highligh
 t the challenges in this area. We will discuss recent results involvin
 g linear algebraic tools for learning\, such as methods in non-negative ma
 trix factorization that include tailored approaches for fairness. Then\, w
 e will discuss new foundational results that theoretically justify phenome
 na like benign overfitting in neural networks.  Lastly\, we will mention s
 ome recent results on observational multiplicity\, and how those can be ut
 ilized to improve equity. Throughout the talk\, we will include example ap
 plications from collaborations with community partners\, using machine lea
 rning to help organizations with fairness and justice goals. This talk in
 cludes joint work with Erin George\, Kedar Karhadkar\, Lara Kassab\, and
  Guido Montufar.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/82/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Simone Brugiapaglia (Concordia University)
DTSTART:20260410T223000Z
DTEND:20260410T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/83
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 83/">From compression to depth: generative compressive sensing and deep gr
 eedy unfolding for signal reconstruction</a>\nby Simone Brugiapaglia (Conc
 ordia University) as part of SFU Mathematics of Computation\, Application 
 and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nSince it
 s inception in the early 2000s\, compressive sensing has become a well-est
 ablished paradigm for efficient signal recovery\, with applications rangin
 g from medical imaging to scientific computing. More recently\, data-drive
 n reconstruction methods based on deep neural networks have attracted cons
 iderable attention and shown great promise as an alternative approach. In 
 this talk\, we will review recent progress in signal reconstruction techni
 ques that combine principles from compressive sensing and deep learning. F
 irst\, we will discuss recent advances in generative compressive sensing\,
  where the traditional sparsity prior is replaced by the assumption that t
 he signal to be reconstructed lies in the range of a deep generative neura
 l network. Second\, we will explore deep greedy unfolding\, which involves
  designing deep neural network architectures by "unrolling" the iterations
  of a sparse recovery algorithm onto the layers of a trainable neural netw
 ork. In both cases\, we will present numerical results in tandem with theo
 retical guarantees.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/83/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ricardo Baptista (University of Toronto)
DTSTART:20260417T223000Z
DTEND:20260417T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/84
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 84/">Processing Language\, Images and Other Data Modalities</a>\nby Ricard
 o Baptista (University of Toronto) as part of SFU Mathematics of Computati
 on\, Application and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nA
 bstract\nA fundamental problem in artificial intelligence is how to simult
 aneously deploy data from different sources\, such as audio\, images\, tex
 t\, and video\, collectively known as multimodal data. In this talk\, I wi
 ll present a mathematical framework for studying this question\, focusing 
 primarily on text and images. I will begin by describing how large languag
 e models (LLMs) operate\, addressing the challenging issue of using real-n
 umber algorithms to process language. In particular\, I will explain next-
 token prediction\, the core of current LLM methodology. I will then focus 
 on the canonical problem of measuring alignment between image and text dat
 a (contrastive learning). Finally\, I will describe how images can be gene
 rated from text prompts (conditional generative modeling). From a mathemat
 ical perspective\, a unifying theme underlying this work is the minimizati
 on of divergences defined on spaces of probability measures. A second key 
 mathematical idea is the attention mechanism—a form of nonlinear correla
 tion between vector-valued sequences. I aim to explain these concepts and 
 their relevance to modern machine learning algorithms in an accessible fas
 hion for a broad audience from the mathematical and computational sciences
 .\n
LOCATION:https://researchseminars.org/talk/AppliedMath/84/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Harish S. Bhat (UC Merced)
DTSTART:20260320T223000Z
DTEND:20260320T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/85
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 85/">Learning and Control Problems for Electron Dynamics</a>\nby Harish S.
  Bhat (UC Merced) as part of SFU Mathematics of Computation\, Application 
 and Data ("MOCAD") Seminar\n\nLecture held in K9509.\n\nAbstract\nTo compu
 te the quantum dynamics of a molecule's electrons\, one tractable way to p
 roceed is via time-dependent density functional theory (TDDFT). TDDFT give
 s equations of motion that\, in principle\, yield the same electron densit
 y as the full but intractable time-dependent Schrodinger equation. However
 \, there is one term in the TDDFT Hamiltonian whose functional form is unk
 nown: the exchange-correlation potential (Vxc). This motivates the idea of
  trying to learn Vxc (or\, at least\, an improved model of Vxc) from data.
  I will review progress on this problem that includes (i) generation of su
 itable training data\, (ii) direct learning of Vxc neural network models i
 n one spatial dimension\, and (iii) PDE-constrained optimization technique
 s to learn Vxc in two spatial dimensions. A key ingredient in (ii) and (ii
 i) will be the adjoint method\, which connects our work to quantum optimal
  control. We will conclude by briefly describing how to use the adjoint me
 thod (together with small neural networks) to solve quantum optimal contro
 l problems for molecules driven by electric fields.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/85/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stefania Fresca (University of Washington)
DTSTART:20260206T233000Z
DTEND:20260207T003000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/86
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 86/">Handling geometric variability and multi-scale optimization in surrog
 ate models</a>\nby Stefania Fresca (University of Washington) as part of S
 FU Mathematics of Computation\, Application and Data ("MOCAD") Seminar\n\n
 Lecture held in K9509.\n\nAbstract\nSolving differential problems using fu
 ll order models (FOMs)\, such as the finite element method\, can result i
 n prohibitive computational costs\, particularly in real-time simulations 
 and multi-query routines. Surrogate modeling aims to replace FOMs with mod
 els characterized by much lower complexity but still able to express the p
 hysical features of the system under investigation.\n\nIn many application
 s\, the available data are inherently multi-resolution\, either due to geo
 metric variability\, where solutions are defined on parametrized domains\,
  or due to the need to capture phenomena across different spatial scales. 
 Motivated by this observation\, two complementary approaches to surrogate 
 modeling for parametrized PDEs are introduced and analyzed.\n\nFirst\, Con
 tinuous Geometry-Aware DL-ROMs (CGA-DL-ROMs) are introduced. The space-con
 tinuous formulation of the proposed architecture enables it to deal with
  mult
 i-resolution datasets\, which commonly arise in the presence of geometrica
 l parametrizations. Furthermore\, CGA-DL-ROMs are endowed with a strong in
 ductive bias that explicitly accounts for geometric parameters\, allowing 
 the distinct impact of geometric variability on the solution manifold to b
 e captured. This geometrical awareness leads to improved compression prope
 rties and enhanced overall performance of the surrogate model.\n\nSecond\,
  a Multi-Level Monte Carlo (MLMC) training strategy for operator learning 
 is proposed\, exploiting hierarchies of resolutions of function discretiz
 at
 ions. The approach combines inexpensive gradient estimates obtained from c
 oarse-resolution data with corrective contributions from a limited number 
 of fine-resolution samples\, thereby reducing the overall training cost wh
 ile preserving accuracy. The MLMC training framework is architecture-agnos
 tic and applicable to any architecture capable of handling multi-resolutio
 n data. Numerical experiments highlight the existence of a Pareto trade-of
 f between accuracy and computational cost governed by the distribution of 
 samples across resolution levels.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/86/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jordan Sawchuk (SFU)
DTSTART:20260313T223000Z
DTEND:20260313T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/87
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 87/">A (nearly) random walk through thermodynamic geometry: Friction\, opt
 imal transport\, and curvature</a>\nby Jordan Sawchuk (SFU) as part of SFU
  Mathematics of Computation\, Application and Data ("MOCAD") Seminar\n\nLe
 cture held in K9509.\n\nAbstract\nMinimizing energy dissipation in driven 
 stochastic systems is a fundamental goal in nonequilibrium thermodynamics.
  In the linear-response (slow driving) regime\, this becomes a problem of 
 Riemannian geometry: The control space is equipped with a metric (the "gen
 eralized friction tensor") and optimal protocols are geodesics. This talk 
 follows one physicist's (nearly) random walk through the mathematical land
 scape in an effort to understand this thermodynamic geometry. \n\nI will d
 emonstrate that the generalized friction tensor is deeply connected to the
  network topology of the controlled system\, revealing unexpected links to
  previously established graph-theoretic geometries. Treating the friction 
 tensor as a metric on the probability simplex\, I show that the metric ten
 sor is directly related to the mean first-passage times between states\, a
 nd that dissipation is equivalently seen as a discrete $L^2$-Wasserstein t
 ransport cost or as Joule heating in a resistor network. \n\nFinally\, I w
 ill share recent results\, open questions\, and grand ambitions regarding 
 an extrinsic geometry of control. I will discuss how the "cost of constrai
 nt" can be framed using the second fundamental form and normal curvature\,
  how graph automorphisms map onto manifold isometries\, and highlight how 
 geometric stability analysis (via Jacobi fields) can be used to predict wh
 en symmetry-breaking protocols become energetically optimal.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/87/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Olivier Lafitte (Université Sorbonne Paris Nord)
DTSTART:20260427T223000Z
DTEND:20260427T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/88
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 88/">Resonances in a cold plasma</a>\nby Olivier Lafitte (Université Sorb
 onne Paris Nord) as part of SFU Mathematics of Computation\, Application a
 nd Data ("MOCAD") Seminar\n\nInteractive livestream: https://sfu.zoom.us/j
 /88232824688?pwd=SSwf2Nk28PAmzRguQcYrdLYaXKHml9.1\nLecture held in K9509.\
 n\nAbstract\nWe consider a magnetized plasma (in the case of a tokamak) w
 here the density of ions $n_0$ as well as the imposed vertical magnetic f
 ield $B_0$ depend on the horizontal variable $x$. The linearized system o
 f Euler-Maxwell equations (a system of 10 first-order PDEs) around the so
 lution $(E\,B\,v\,n)_0=(0\,B_0(x)\,0\,n_0(x))$ is characterized by the tw
 o frequencies denoted by $\\omega_p(x)$ and $\\omega_c(x)$ (the plasma an
 d cyclotron frequencies\, respectively). Classically\, the cyclotron freq
 uency is associated with Landau damping\, but this frequency does not app
 ear to be a resonance in the cold plasma model (as we prove). However\, a
 nother frequency of interest\, called the hybrid frequency $\\omega_h(x)=
 \\sqrt{\\omega_p^2(x)+\\omega_c^2(x)}$\, is a resonance for the system: a
 t any point $x_h$ where the imposed oscillation frequency $\\omega$ is eq
 ual to $\\omega_h(x_h)$\, we have energy transfer from the electrons to t
 he electric field. We prove this using Bessel functions in the study of t
 he corresponding linear system of ODEs near $x_h$.\n\nJoint work with Bru
 no Despres (Sorbonne Université) and Lise-Marie Imbert-Gerard (Universit
 y of Arizona).\n
LOCATION:https://researchseminars.org/talk/AppliedMath/88/
URL:https://sfu.zoom.us/j/88232824688?pwd=SSwf2Nk28PAmzRguQcYrdLYaXKHml9.1
END:VEVENT
BEGIN:VEVENT
SUMMARY:Laura Weidensager (Simon Fraser University)
DTSTART:20260327T223000Z
DTEND:20260327T233000Z
DTSTAMP:20260422T212528Z
UID:AppliedMath/89
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/AppliedMath/
 89/">Fast high-dimensional approximation: ANOVA methods for wavelets and r
 andom Fourier features</a>\nby Laura Weidensager (Simon Fraser University)
  as part of SFU Mathematics of Computation\, Application and Data ("MOCAD"
 ) Seminar\n\nLecture held in K9509.\n\nAbstract\nIn this talk\, we focus o
 n the problem of reconstructing a multivariate function from discrete d-di
 mensional samples. Beyond achieving accurate function recovery\, we aim to
  enhance interpretability by identifying how individual variables and thei
 r interactions influence the target function. To this end\, we develop sev
 eral efficient hybrid methods that combine the ANOVA decomposition\, wavel
 et techniques\, and random Fourier features. The multi-resolution capabili
 ties of wavelets and the scalability of random Fourier features\, paired w
 ith the interpretability provided by the ANOVA decomposition\, enable a ro
 bust framework for high-dimensional function approximation. The approaches
  in this talk address both computational efficiency and transparency.\n\
 nThe total approximation error is influenced by three main components. Fir
 st\, the ANOVA truncation to a function of low effective dimension is the 
 basis for the construction of ANOVA-boosting algorithms\, which exploit th
 e structure of the function. Second\, the projection onto a finite-dimensi
 onal subspace is determined by the choice of basis functions. To analyze t
 he projection error\, we explore and discuss wavelet characterizations of 
 functions in certain function spaces\, like Sobolev and Besov spaces. Fin
 ally\, for the regression from samples\, we give error bounds for the leas
 t squares approximation\, which asymptotically coincides with the behavior
  of the projection error.\n
LOCATION:https://researchseminars.org/talk/AppliedMath/89/
END:VEVENT
END:VCALENDAR