BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Nicholas J. Higham (University of Manchester\, UK)
DTSTART:20200429T140000Z
DTEND:20200429T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/2
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/2/">Ar
 e Numerical Linear Algebra Algorithms Accurate at Extreme Scale and at Low
  Precisions?</a>\nby Nicholas J. Higham (University of Manchester\, UK) as
  part of E-NLA - Online seminar series on numerical linear algebra\n\n\nAb
 stract\nThe advent of exascale computing will bring the capability to solv
 e dense linear systems of order $10^8$. At the same time\, computer hardwa
 re is increasingly supporting low precision floating-point arithmetics\, s
 uch as the IEEE half precision and bfloat16 arithmetics.  The standard rou
 nding error bound for the inner product of two $n$-vectors $x$ and $y$ is 
 $|fl(x^Ty) - x^Ty| \\le n u |x|^T|y|$\,   where $u$ is the unit roundoff\,
  and the bound is approximately attainable.  This bound provides useful in
 formation only if $nu < 1$.  Yet $nu > 1$ for exascale-size problems solve
 d in single precision and also for problems of order $n > 2048$ solved in 
 half precision. Standard error bounds for matrix multiplication\, LU facto
 rization\, and so on\, are equally uninformative in these situations. Yet 
 the supercomputers in the TOP500 are there by virtue of having successfull
 y solved linear systems of orders up to $10^7$\, and deep learning impleme
 ntations routinely use half precision with apparent success.\n\nHave we re
 ached the point where our techniques for analyzing rounding errors\, honed
  over 70 years of digital computation\,  are unable to predict the accurac
 y of numerical linear algebra computations that are now routine? I will sh
 ow that the answer is "no": we can understand the behaviour of extreme-sca
 le and low accuracy computations. The explanation lies in algorithmic desi
 gn techniques (both new and old) that help to reduce error growth along wi
 th a new probabilistic approach to rounding error analysis.\n
LOCATION:https://researchseminars.org/talk/E-NLA/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michele Benzi (Scuola Normale Superiore Pisa\, Italy)
DTSTART:20200506T140000Z
DTEND:20200506T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/3
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/3/">No
 nlocal dynamics on networks via fractional graph Laplacians: theory and nu
 merical methods</a>\nby Michele Benzi (Scuola Normale Superiore Pisa\, Ita
 ly) as part of E-NLA - Online seminar series on numerical linear algebra\n
 \n\nAbstract\nNonlocal diffusive dynamics on large\, sparse networks can b
 e modeled by means of systems of differential equations involving fraction
 al graph Laplacians. The solution of such systems leads to non-analytic ma
 trix functions\, due to the singularity of the graph Laplacian. Off-diagon
 al decay estimates for these and related matrix functions will be presente
 d\, together with explicit (closed form) expressions for some simple but i
 mportant examples. The case of directed networks (leading to nonsymmetric 
 Laplacians) will also be discussed.\n\nThe numerical approximation of the 
 dynamics can be implemented by means of Krylov subspace methods. The lack 
 of smoothness of the underlying function suggests the use of rational appr
 oximation techniques. Some results using a shift-and-invert approach will 
 be presented.\n\nApplications include the efficient exploration of large s
 patial networks and consensus dynamics in multi-agent systems.\n\nThis is 
 joint work with Daniele Bertaccini (U. of Rome ‘Tor Vergata’)\, Fabio Dur
 astante (IAC-CNR)\, and Igor Simunec (Scuola Normale Superiore).\n
LOCATION:https://researchseminars.org/talk/E-NLA/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Volker Mehrmann (Technische Universität Berlin\, Germany)
DTSTART:20200513T140000Z
DTEND:20200513T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/4
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/4/">Ro
 bustness of linear algebra properties for Port-Hamiltonian systems</a>\nby
  Volker Mehrmann (Technische Universität Berlin\, Germany) as part of E-N
 LA - Online seminar series on numerical linear algebra\n\n\nAbstract\nPort
 -Hamiltonian systems are an important class of control systems that arise 
 in all areas of science and engineering. When the system is linearized aro
 und a stationary solution one gets a linear port-Hamiltonian system. Despi
 te the fact that the system looks unstructured at first sight\, it has rem
 arkable properties.  Stability and passivity are automatic\, spectral stru
 ctures for purely imaginary eigenvalues\, eigenvalues at infinity\, and ev
 en singular blocks in the Kronecker canonical form are very restricted and
  furthermore the structure leads to fast and efficient iterative solution 
 methods for associated linear systems. When port-Hamiltonian systems are su
 bject to (structured) perturbations\, it is important to determine the smal
 lest perturbations under which these properties are lost. The computation o
 f these structured distances to instability\, non-passivity\, or non-regula
 rity\, is typically a very hard optimization problem.
  However\, in the context of port-Hamiltonian systems\, the computation be
 comes much easier and can even be implemented efficiently for large scale 
 problems in combination with model reduction techniques. We will discuss t
 hese distances and the computational methods and illustrate the results vi
 a an industrial problem in the context of noise reduction for disk brakes.
 \n
LOCATION:https://researchseminars.org/talk/E-NLA/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ilse Ipsen (North Carolina State University\, USA)
DTSTART:20200520T140000Z
DTEND:20200520T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/5
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/5/">Pr
 obabilistic numerical linear solvers</a>\nby Ilse Ipsen (North Carolina St
 ate University\, USA) as part of E-NLA - Online seminar series on numerica
 l linear algebra\n\n\nAbstract\nWe formulate iterative methods for the sol
 ution of nonsingular linear systems as statistical inference processes by 
 modeling the epistemic uncertainty in the iterates due to a limited comput
 ational budget. The goal is to obtain well-calibrated uncertainty  that is
  more insightful than traditional worst-case bounds\, and to produce a  pr
 obabilistic description of the error that can be propagated coherently thr
 ough a computational pipeline.\n\nOur Bayesian Conjugate Gradient Method (
 BayesCG) for real symmetric positive-definite linear systems posits a prio
 r distribution for the solution\, and conditions on the finite amount of i
 nformation obtained during the iterations to  produce a posterior distribu
 tion that reflects the reduced uncertainty.  The following topics will be 
 addressed:  (i) choice of prior for fast convergence and well-calibrated u
 ncertainty\; (ii) error estimation through test statistics that mitigate t
 he effect of BayesCG's nonlinear dependence on the solution\; and (iii) nu
 merical stability to maintain positive semi-definiteness of the posteriors
 \, and prevent convergence slowdown from loss of orthogonality in residua
 ls and search directions.\n\nThis is joint work with Jon Cockayne (http://
 www.joncockayne.com/)\, Chris J. Oates (http://oates.work/)\, and Timothy 
 W. Reid (https://math.sciences.ncsu.edu/people/twreid/).\n
LOCATION:https://researchseminars.org/talk/E-NLA/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Cleve Moler (MathWorks\, Inc.)
DTSTART:20200527T140000Z
DTEND:20200527T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/6
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/6/">Th
 e Evolution of "The Evolution of MATLAB"</a>\nby Cleve Moler (MathWorks\, 
 Inc.) as part of E-NLA - Online seminar series on numerical linear algebra
 \n\n\nAbstract\nWe show how MATLAB has evolved over more than 40 years fro
 m a simple matrix calculator to a powerful technical computing environment
 . We demonstrate several examples of MATLAB applications.  We conclude wit
 h a discussion of current developments\, including machine learning\, auto
 mated driving and parallel computation.\n
LOCATION:https://researchseminars.org/talk/E-NLA/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nick Trefethen (University of Oxford)
DTSTART:20200603T140000Z
DTEND:20200603T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/7
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/7/">Va
 ndermonde with Arnoldi</a>\nby Nick Trefethen (University of Oxford) as pa
 rt of E-NLA - Online seminar series on numerical linear algebra\n\n\nAbstr
 act\nVandermonde matrices are exponentially ill-conditioned\, rendering th
 e familiar “polyval(polyfit)” algorithm for polynomial interpolation a
 nd least-squares fitting ineffective at higher degrees. We show that Arnol
 di orthogonalization fixes the problem.\n\nIt's remarkable how widely this
  trick is applicable.  Half a dozen examples will be presented.\n\nThis is
  joint work with Pablo Brubeck and Yuji Nakatsukasa.\n
LOCATION:https://researchseminars.org/talk/E-NLA/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Martin Gander (University of Geneva)
DTSTART:20200610T140000Z
DTEND:20200610T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/8
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/8/">A 
 Linear Algebra Approach to Time Parallelization: Parareal\, ParaExp\, Para
 Diag\, ParaOpt and ParaStieltjes</a>\nby Martin Gander (University of Gene
 va) as part of E-NLA - Online seminar series on numerical linear algebra\n
 \n\nAbstract\nTime parallelization has been a very active research area ov
 er the past decade. This is because modern computer architectures are so m
 assively parallel that parallelization in the spatial direction alone rare
 ly suffices to take full advantage of such systems when solving evolution p
 roblems. Time par
 allelization is however quite different from spatial parallelization\, sin
 ce information only propagates forward in time\, never backward. Time para
 llelization algorithms are often derived at the PDE level\, but whenever t
 hey are used\, they take the form of solvers for linear algebra problems. 
 I will give in my presentation an introduction to such algorithms at the l
 inear algebra level\, starting with two simple but typical model problems\
 , namely a heat equation and a transport equation. At the linear algebra l
 evel\, these two problems look deceivingly similar\, but time parallel alg
 orithms need different features when solving one or the other in parallel.
  I will explain the reason for this at the linear algebra level\, and then
  show how Parareal\, ParaExp and ParaDiag address them. If time permits\, 
 I will also briefly explain the newer classes of ParaOpt and ParaStieltjes
  algorithms.\n
LOCATION:https://researchseminars.org/talk/E-NLA/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mark Embree (Virginia Tech)
DTSTART:20200617T140000Z
DTEND:20200617T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/9
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/9/">Co
 ntour Integral Methods for Nonlinear Eigenvalue Problems: A Systems Theory
  Perspective</a>\nby Mark Embree (Virginia Tech) as part of E-NLA - Online
  seminar series on numerical linear algebra\n\n\nAbstract\nContour integra
 l methods for nonlinear eigenvalue problems seek to compute a subset of th
 e spectrum in a bounded region of the complex plane. We briefly survey thi
 s class of algorithms\, establishing a relationship to system realization 
 techniques in control theory. This connection motivates new contour integr
 al methods that build on recent developments in rational interpolation of 
 dynamical systems. The resulting techniques\, which replace the usual Hank
 el matrices with Loewner matrix pencils\,  incorporate general interpolati
 on schemes and permit ready recovery of eigenvectors.  Numerical examples 
 illustrate the potential of this approach.\n\nThis talk describes joint wo
 rk with Michael Brennan (MIT) and Serkan Gugercin (Virginia Tech).\n
LOCATION:https://researchseminars.org/talk/E-NLA/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:James Demmel (University of California at Berkeley)
DTSTART:20200624T140000Z
DTEND:20200624T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/10
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/10/">C
 ommunication-Avoiding Algorithms for Linear Algebra\, Machine Learning\, a
 nd Beyond</a>\nby James Demmel (University of California at Berkeley) as p
 art of E-NLA - Online seminar series on numerical linear algebra\n\n\nAbst
 ract\nAlgorithms have two costs: arithmetic and communication\, i.e. movin
 g data between levels of a memory hierarchy or processors over a network. 
 Communication costs (measured in time or energy per operation) already gre
 atly exceed arithmetic costs\, and the gap is growing over time following 
 technological trends. Thus our goal is to design algorithms that minimize 
 communication. We present new algorithms that communicate asymptotically l
 ess than their classical counterparts\, for a variety of linear algebra an
 d machine learning problems\, demonstrating large speedups on a variety of
  architectures. Some of these algorithms attain provable lower bounds on c
 ommunication. We describe generalizations of these bounds\, and optimal al
 gorithms\, to arbitrary code that can be expressed as nested loops accessi
 ng arrays\, and to account for arrays having different precisions.\n
LOCATION:https://researchseminars.org/talk/E-NLA/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tamara G. Kolda (Sandia National Laboratories)
DTSTART:20200701T140000Z
DTEND:20200701T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/11
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/11/">P
 ractical Leverage-Based Sampling for Low-Rank Tensor Decomposition</a>\nby
  Tamara G. Kolda (Sandia National Laboratories) as part of E-NLA - Online 
 seminar series on numerical linear algebra\n\n\nAbstract\nConventional alg
 orithms for finding low-rank canonical polyadic (CP) tensor decompositions
  are unwieldy for large sparse tensors. The CP decomposition can be compu
 ted by solving a sequence of overdetermined least squares problems with s
 pecial Khatri-Rao structure. In this work\, we present an application of r
 andomized algorithms to fitting the CP decomposition of sparse tensors\, s
 olving a s
 ignificantly smaller sampled least squares problem at each iteration with 
 probabilistic guarantees on the approximation errors. Prior work has shown
  that sketching is effective in the dense case\, but the prior approach ca
 nnot be applied to the sparse case because a fast Johnson-Lindenstrauss tr
 ansform (e.g.\, using a fast Fourier transform) must be applied in each mo
 de\, causing the sparse tensor to become dense. Instead\, we perform sketc
 hing through leverage score sampling\, crucially relying on the fact that 
 the structure of the Khatri-Rao product allows sampling from overestimates
  of the leverage scores without forming the full product or the correspond
 ing probabilities. Naïve application of leverage score sampling is ineffe
 ctive because we often have cases where a few scores are quite large\, so 
 we propose a novel hybrid of deterministic and random leverage-score sampl
 ing which consistently yields improved fits. Numerical results on real-wor
 ld large-scale tensors show the method is significantly faster than compet
 ing methods without sacrificing accuracy.  This is joint work with Brett L
 arsen\, Stanford University.\n
LOCATION:https://researchseminars.org/talk/E-NLA/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Laura Grigori (INRIA Paris)
DTSTART:20200708T140000Z
DTEND:20200708T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/12
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/12/">C
 ommunication avoiding low rank matrix approximation\, a unified perspectiv
 e on deterministic and randomized approaches</a>\nby Laura Grigori (INRIA
  Paris) as part of E-NLA - Online seminar series on numerical linear algeb
 ra\n\n\nAbstract\nIn this talk we present a unified perspective on determin
 istic and randomized approaches for computing the low rank approximation o
 f a matrix. We survey recent approaches that minimize communication and di
 scuss a generalized LU factorization that unifies several existing algorit
 hms. For this factorization we present an improved analysis which combine
 s deterministic guarantees with sketching ensembles satisfying Johnson-Li
 ndenstrauss properties. We then extend some of the algorithms to computin
 g the low rank approximation of a tensor using HOSVD\, while also avoidin
 g communication.\n
LOCATION:https://researchseminars.org/talk/E-NLA/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christian Lubich (University of Tübingen)
DTSTART:20200715T140000Z
DTEND:20200715T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/13
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/13/">D
 ynamical low-rank approximation</a>\nby Christian Lubich (University of T
 übingen) as part of E-NLA - Online seminar series on numerical linear alg
 ebra\n\n\nAbstract\nThis talk reviews differential equations and their num
 erical solution on manifolds of low-rank matrices or of tensors with a ran
 k structure such as tensor trains or general tree tensor networks. These l
 ow-rank differential equations serve to approximate\, in a data-compressed
  format\, large time-dependent matrices and tensors or multivariate functi
 ons that are either given explicitly via their increments or are unknown s
 olutions to high-dimensional evolutionary differential equations\, with mu
 lti-particle time-dependent Schrödinger equations and kinetic equations s
 uch as Vlasov equations as noteworthy examples of applications.\n\nRecentl
 y developed numerical time integrators are  based on splitting the project
 ion onto the tangent space of the low-rank manifold at the current approxi
 mation. In contrast to all standard integrators\, these projector-splittin
 g methods are robust to the unavoidable presence of small singular values 
 in the low-rank approximation. This robustness relies on exploiting geomet
 ric properties of the manifold of low-rank matrices or tensors: in each su
 bstep of the projector-splitting algorithm\, the approximation moves along
  a flat subspace of the low-rank manifold. In this way\, high curvature du
 e to small singular values does no harm.\n\nThis talk is based on work don
 e intermittently over the last decade with Othmar Koch\, Bart Vandereycken
 \, Ivan Oseledets\, Emil Kieri\, Hanna Walach and Gianluca Ceruti.\n
LOCATION:https://researchseminars.org/talk/E-NLA/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael Ng (University of Hong Kong)
DTSTART:20200722T140000Z
DTEND:20200722T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/14
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/14/">N
 onnegative low rank matrix approximation and its applications</a>\nby Mich
 ael Ng (University of Hong Kong) as part of E-NLA - Online seminar series 
 on numerical linear algebra\n\n\nAbstract\nIn this talk\, we study nonneg
 ative low rank matrix approximation (NLRM) for matrices arising from many d
 ata mining and pattern recognition applications. Our approach is different
  from classical nonnegative matrix factorization (NMF) which has been stud
 ied for some time. For a given nonnegative matrix\, the usual NMF approach
  is to determine two nonnegative low rank matrices such that the distance 
 between their product and the given nonnegative matrix is as small as pos
 sible. However\, the proposed NLRM approach is to determine a nonnegativ
 e low rank matrix such that the distance between this matrix and the give
 n nonnegative matrix is as small as possible. There are two advantages. (i
 ) The minimized distance can be smaller. (ii) The proposed method can ide
 ntify important singular basis vectors\, while this information may not b
 e obtained in the classical NMF. Numerical results are reported to demons
 trate the performance of the proposed method. Several extensions and rela
 ted research directions are also presented.\n\nThis talk describes joint w
 ork with Tai-Xiang Ji
 ang (Southwestern University of Finance and Economics)\, JunJun Pan (Unive
 rsite de Mons)\, Guang-Jing Song (Weifang University) and Hong Zhu (Jiangs
 u University).\n
LOCATION:https://researchseminars.org/talk/E-NLA/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Keyes (KAUST)
DTSTART:20200909T140000Z
DTEND:20200909T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/15
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/15/">D
 ata-sparse Linear Algebra Algorithms for Large-scale Applications on Emerg
 ing Architectures</a>\nby David Keyes (KAUST) as part of E-NLA - Online se
 minar series on numerical linear algebra\n\n\nAbstract\nA traditional goal
  of algorithmic optimality\, squeezing out operations\, has been supersede
 d because of evolution in architecture. Algorithms must now squeeze memory
 \, data transfers\, and synchronizations\, while extra operations on local
 ly cached data cost relatively little time or energy. Hierarchically low-r
 ank matrices realize a rarely achieved combination of optimal storage comp
 lexity and high computational intensity in approximating a wide class of f
 ormally dense operators that arise in exascale applications. They may be r
 egarded as algebraic generalizations of the fast multipole method. Methods
  based on hierarchical tree-based data structures and their simpler cousin
 s\, tile low-rank matrices\, are well suited for early exascale architectu
 res\, which are provisioned for high processing power relative to memory c
 apacity and memory bandwidth. These data-sparse algorithms are ushering in
  a renaissance of numerical linear algebra. We describe modules of a softw
 are toolkit\, Hierarchical Computations on Manycore Architectures (HiCMA)\
 , that illustrate these features on several applications. Early modules of
  this open-source project are distributed in software libraries of major v
 endors. A recent addition\, H2Opus\, extends H2 hierarchical matrix operat
 ions to distributed memory and GPUs.\n
LOCATION:https://researchseminars.org/talk/E-NLA/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Elisabeth Ullmann (TU Munich)
DTSTART:20200916T140000Z
DTEND:20200916T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/16
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/16/">A
 pproximation of parametric covariance matrices</a>\nby Elisabeth Ullmann (
 TU Munich) as part of E-NLA - Online seminar series on numerical linear al
 gebra\n\n\nAbstract\nCovariance operators model the spatial\, temporal or 
 other correlation between collections of random variables. In modern appli
 cations these random variables are often associated with an infinite-dimen
 sional or high-dimensional function space. Examples are the solution of a 
 partial differential equation with random coefficients in uncertainty quan
 tification (UQ)\, or Gaussian process regression in machine learning. When
  a suitable discretization of the function space has been applied\, the di
 scretized covariance operator becomes a very large matrix - the covariance
  matrix - with a size that is of the order of the dimension of the discret
 e space squared.\n\nCovariance matrices are naturally symmetric and positi
 ve semi-definite\, but in the applications we are interested in\, they are
  typically dense. To avoid the enormous cost of creating and handling thes
 e dense matrices\, efficient low-rank approximations such as the pivoted C
 holesky decomposition\, or the adaptive cross approximation (ACA) have bee
 n developed during the last decade.\n\nBut the story does not end here: re
 cently\, attention has shifted to parameterized covariance operators. Thi
 s is due to their increased modeling capacity\, e.g.\, in Bayesian
  inverse problems or Gaussian process regression with hyperparameters in m
 achine learning. Now we are faced with the task of approximating a parame
 tric covariance matrix where the parameter itself is a random process. Si
 mply repeating the ACA or pivoted Cholesky decomposition for different pa
 rameter values is inefficient and most certainly too expensive in practic
 e.\n\n
 We introduce and study two algorithms for the approximation of parametric 
 families of covariance matrices. The first approach is a (non-certified) a
 pproximation\, and employs a reduced basis associated with a collection of
  eigenvectors for specific parameter values. The second approach is a cert
 ified extension of the ACA where the approximation error is controlled in 
 the Wasserstein-2 distance of two Gaussian measures. Both approaches rely 
 on an affine linear expansion of the covariance operator with respect to t
 he parameter. This keeps the computational cost under control. Notably\, b
 oth algorithms do not require regular meshes in the covariance operator di
 scretization and can be used on irregular domains.\n\nThis talk describes 
 joint work with Daniel Kressner (EPFL)\, Jonas Latz (University of Cambrid
 ge)\, Stefano Massei (TU/e) and Marvin Eisenberger (TUM).\n
LOCATION:https://researchseminars.org/talk/E-NLA/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Howard Elman (University of Maryland)
DTSTART:20200930T140000Z
DTEND:20200930T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/17
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/17/">M
 ultigrid Methods for Computing Low-Rank Solutions to Parameter-Dependent P
 artial Differential Equations</a>\nby Howard Elman (University of Maryland
 ) as part of E-NLA - Online seminar series on numerical linear algebra\n\n
 \nAbstract\nThe collection of solutions of discrete parameter-dependent pa
 rtial differential equations often takes the form of a low-rank matrix. We
  show that in this scenario\, iterative algorithms for computing these sol
 utions can take advantage of low-rank structure to reduce both computation
 al effort and memory requirements. Implementation of such solvers requires
  that explicit rank-compression computations be done to truncate the ranks
  of intermediate quantities that must be computed. We prove that when trun
 cation strategies are used as part of a multigrid solver\, the resulting a
 lgorithms retain "textbook" (grid-independent) convergence rates\, and we 
 demonstrate how the truncation criteria affect convergence behavior. In ad
 dition\, we show that these techniques can be used to construct efficient 
 solution algorithms for computing the eigenvalues of parameter-dependent o
 perators. In this setting\, parameterized eigenvectors can be grouped into
  matrices of low-rank structure\, and we introduce a variant of inverse su
 bspace iteration for computing them.  We demonstrate the utility of this a
 pproach on two benchmark problems\, a stochastic diffusion problem with so
 me poorly separated eigenvalues\, and an operator derived from a discrete 
 Stokes problem whose minimal eigenvalue is related to the inf-sup stabilit
 y constant.\n\nThis is joint work with Tengfei Su.\n
LOCATION:https://researchseminars.org/talk/E-NLA/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sherry Li (Lawrence Berkeley National Laboratory)
DTSTART:20201014T140000Z
DTEND:20201014T150000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/18
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/18/">A
 utotuning exascale applications with Gaussian process regression</a>\nby S
 herry Li (Lawrence Berkeley National Laboratory) as part of E-NLA - Online
  seminar series on numerical linear algebra\n\n\nAbstract\nSignificant eff
 ort has been invested to develop highly scalable numerical libraries and h
 igh-fidelity modeling and simulation for the upcoming exascale computers. 
 These codes typically involve many parameters which need to be selected pr
 operly to optimize performance on the underlying parallel machine. They ar
 e also expensive to run and thus permit only a limited number of "functio
 n evaluations"\, which poses significant challenges to efficient performa
 nce tuning on diverse architectures.\n\nBayesian optimization with Gaussi
 an process regres
 sion is an attractive machine learning framework to build surrogate models
  with limited function evaluation points. In order to fully utilize all th
 e available data\, we leverage multitask learning and multi-armed bandit s
 trategies to build a more advanced Bayesian optimization framework.\n\nWe 
 have developed an open-source software tool\, called GPTune\, for optimizi
 ng expensive large-scale HPC codes. We will show several features of GPTun
 e\, e.g.\, incorporation of coarse performance models to improve the Bayes
 ian model\, multi-objective tuning such as tuning a hybrid of time\, memor
 y and accuracy\, and reuse of a historical database for model portability.\
 n\nWe will demonstrate the efficiency and effectiveness of GPTune when it 
 is applied to numerical linear algebra libraries\, such as ScaLAPACK\, Sup
 erLU and Hypre\, as well as fusion simulation codes M3D-C1 and NIMROD.\n\n
 This talk describes joint work with James Demmel\, Yang Liu\, Osni Marques
 \, Wissam Sid-Lakhdar and Xianran Zhu.\n
LOCATION:https://researchseminars.org/talk/E-NLA/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yuji Nakatsukasa (University of Oxford)
DTSTART:20201028T150000Z
DTEND:20201028T160000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/19
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/19/">F
 ast and stable randomized low-rank matrix approximation</a>\nby Yuji Nakat
 sukasa (University of Oxford) as part of E-NLA - Online seminar series on 
 numerical linear algebra\n\n\nAbstract\nRandomized SVD has become an extre
 mely successful approach for efficiently computing a low-rank approximatio
 n of matrices. In particular the paper by Halko\, Martinsson\, and Tropp (
 SIREV 2011) contains extensive analysis\, and has made it a very popular m
 ethod. The typical complexity for a rank-r approximation of m x n matrice
 s is O(mn log n + (m+n)r^2) for dense matrices. The classical Nystrom met
 hod is much faster\, but only applicable to positive semidefinite matrice
 s. This work studies a generalization of Nystrom's method applicable to g
 eneral ma
 trices\, and shows that (i) it has near-optimal approximation quality comp
 arable to competing methods\, (ii) the computational cost is the near-opti
 mal O(mn log n + r^3) for dense matrices\, with small hidden constants\, a
 nd (iii) crucially\, it can be implemented in a numerically stable fashio
 n de
 spite the presence of an ill-conditioned pseudoinverse. Numerical experime
 nts illustrate that generalized Nystrom can significantly outperform state
 -of-the-art methods\, especially when r>>1\, achieving up to a 10-fold spe
 edup. The method is also well suited to updating and downdating the matrix
 .\n
LOCATION:https://researchseminars.org/talk/E-NLA/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Des Higham (University of Edinburgh)
DTSTART:20201111T150000Z
DTEND:20201111T160000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/20
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/20/">C
 oncepts and Algorithms for Higher Order Networks: Beyond Pairwise Interact
 ions</a>\nby Des Higham (University of Edinburgh) as part of E-NLA - Onlin
 e seminar series on numerical linear algebra\n\n\nAbstract\nNetwork scient
 ists have shown that there is great value in studying pairwise interaction
 s between components in a system. From a linear algebra point of view\, th
 is involves defining and evaluating functions of the associated adjacency 
 matrix. Recently\, there has been increased interest in the idea of accoun
 ting directly for higher order features. Such features may be built from t
 he adjacency matrix---for example\, a triangle involving nodes i\, j and k
  arises when the three edges\, i<->j\, j<->k and k<->i are present. In oth
 er contexts\, higher order information appears explicitly---for example\, 
 in a coauthorship network\, a document involving three authors forms a nat
 ural triangle. I will discuss the use of tensor-based definitions and algo
 rithms to exploit such higher order information. The algorithms also incor
 porate nonlinearities that increase flexibility. I will focus on spectral
  methods that extend classical concepts of node centrality and clustering 
 coefficients. The underlying object of study will be a constrained nonline
 ar eigenvalue problem associated with a tensor. Using recent results from 
 nonlinear Perron--Frobenius theory\, we can establish existence and unique
 ness under mild conditions\, and show that such spectral measures can be c
 omputed efficiently and robustly with a nonlinear power method.\n\nThe tal
 k is based on joint work with Francesca Arrigo (University of Strathclyde)
  and Francesco Tudisco (Gran Sasso Science Institute).\n
LOCATION:https://researchseminars.org/talk/E-NLA/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Francoise Tisseur (University of Manchester)
DTSTART:20201125T150000Z
DTEND:20201125T160000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/21
DESCRIPTION:by Francoise Tisseur (University of Manchester) as part of E-NL
 A - Online seminar series on numerical linear algebra\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/E-NLA/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chen Greif (The University of British Columbia)
DTSTART:20201209T150000Z
DTEND:20201209T160000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/22
DESCRIPTION:by Chen Greif (The University of British Columbia) as part of 
 E-NLA - Online seminar series on numerical linear algebra\n\nAbstract: TBA
 \n
LOCATION:https://researchseminars.org/talk/E-NLA/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Anne Greenbaum (University of Washington)
DTSTART:20210113T150000Z
DTEND:20210113T160000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/23
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/23/">S
 pectral Sets: Numerical Range and Beyond</a>\nby Anne Greenbaum (Universit
 y of Washington) as part of E-NLA - Online seminar series on numerical lin
 ear algebra\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/E-NLA/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nicolas Gillis (University of Mons)
DTSTART:20210127T150000Z
DTEND:20210127T160000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/24
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/24/">I
 dentifiability and Computation of Nonnegative Matrix Factorizations</a>\nb
 y Nicolas Gillis (University of Mons) as part of E-NLA - Online seminar se
 ries on numerical linear algebra\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/E-NLA/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jim Nagy (Emory University)
DTSTART:20210210T150000Z
DTEND:20210210T160000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/25
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/25/">K
 rylov Subspace Regularization for Inverse Problems</a>\nby Jim Nagy (Emory
  University) as part of E-NLA - Online seminar series on numerical linear 
 algebra\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/E-NLA/25/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Erin Carson (Charles University)
DTSTART:20210224T150000Z
DTEND:20210224T160000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/26
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/26/">W
 hat do we know about block Gram-Schmidt?</a>\nby Erin Carson (Charles Univ
 ersity) as part of E-NLA - Online seminar series on numerical linear algeb
 ra\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/E-NLA/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gunnar Martinsson (UT Austin)
DTSTART:20210310T150000Z
DTEND:20210310T160000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/27
DESCRIPTION:by Gunnar Martinsson (UT Austin) as part of E-NLA - Online sem
 inar series on numerical linear algebra\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/E-NLA/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Cameron Musco (University of Massachusetts Amherst)
DTSTART:20210324T150000Z
DTEND:20210324T160000Z
DTSTAMP:20260422T225841Z
UID:E-NLA/28
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/E-NLA/28/">H
 utch++: Optimal Stochastic Trace Estimation</a>\nby Cameron Musco (Univers
 ity of Massachusetts Amherst) as part of E-NLA - Online seminar series on 
 numerical linear algebra\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/E-NLA/28/
END:VEVENT
END:VCALENDAR
