BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Elisabeth Ullmann (TU Munich)
DTSTART:20200916T140000Z
DTEND:20200916T150000Z
DTSTAMP:20260423T041509Z
UID:E-NLA/16
DESCRIPTION:Title: Approximation of parametric covariance matrices (https
 ://researchseminars.org/talk/E-NLA/16/)\nby Elisabeth Ullmann (
 TU Munich) as part of E-NLA - Online seminar series on numerical linear al
 gebra\n\n\nAbstract\nCovariance operators model the spatial\, temporal or 
 other correlation between collections of random variables. In modern appli
 cations these random variables are often associated with an infinite-dimen
 sional or high-dimensional function space. Examples are the solution of a 
 partial differential equation with random coefficients in uncertainty quan
 tification (UQ)\, or Gaussian process regression in machine learning. When
  a suitable discretization of the function space has been applied\, the di
 scretized covariance operator becomes a very large matrix - the covariance
  matrix - with a size that is of the order of the dimension of the discret
 e space squared.\n\nCovariance matrices are naturally symmetric and positi
 ve semi-definite\, but in the applications we are interested in\, they are
  typically dense. To avoid the enormous cost of creating and handling thes
 e dense matrices\, efficient low-rank approximations such as the pivoted C
 holesky decomposition or the adaptive cross approximation (ACA) have bee
 n developed during the last decade.\n\nThe story does not end here: atten
 tion has recently shifted to parameterized covariance operat
 ors. This is due to their increased modeling capacity\, e.g.\, in Bayesian
  inverse problems or Gaussian process regression with hyperparameters in m
 achine learning. Now we are faced with the task of approximating a param
 etric covariance matrix where the parameter itself is a random process. S
 imply repeating the ACA or pivoted Cholesky decomposition for different p
 arameter values is inefficient and most certainly too expensive in practi
 ce.\n\n
 We introduce and study two algorithms for the approximation of parametric 
 families of covariance matrices. The first approach is a (non-certified)
  approximation and employs a reduced basis associated with a collection of
  eigenvectors for specific parameter values. The second approach is a cert
 ified extension of the ACA where the approximation error is controlled in 
 the Wasserstein-2 distance between two Gaussian measures. Both approache
 s rely on an affine linear expansion of the covariance operator with res
 pect to t
 he parameter. This keeps the computational cost under control. Notably\,
  neither algorithm requires a regular mesh in the covariance operator dis
 cretization\, and both can be used on irregular domains.\n\nThis talk des
 cribes joint work with Daniel Kressner (EPFL)\, Jonas Latz (University of
  Cambridge)\, Stefano Massei (TU/e) and Marvin Eisenberger (TUM).\n
LOCATION:https://researchseminars.org/talk/E-NLA/16/
END:VEVENT
END:VCALENDAR
