Approximation of parametric covariance matrices

Elisabeth Ullmann (TU Munich)

16-Sep-2020, 14:00-15:00

Abstract: Covariance operators model the spatial, temporal or other correlation between collections of random variables. In modern applications these random variables are often associated with an infinite-dimensional or high-dimensional function space. Examples are the solution of a partial differential equation with random coefficients in uncertainty quantification (UQ), or Gaussian process regression in machine learning. Once a suitable discretization of the function space has been applied, the discretized covariance operator becomes a very large matrix - the covariance matrix - whose number of entries is of the order of the squared dimension of the discrete space.
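As a rough illustration of this scaling (not taken from the talk), the following Python sketch assembles the dense covariance matrix of a squared-exponential kernel on n discretization points; the kernel choice and correlation length are hypothetical.

```python
# Illustrative sketch (not from the talk): assembling the dense covariance
# matrix of a squared-exponential kernel on n discretization points in [0, 1].
import numpy as np

n = 2000                      # number of discretization points
x = np.linspace(0.0, 1.0, n)  # 1D grid; irregular point sets work the same way
ell = 0.1                     # correlation length (hypothetical value)

# Dense n-by-n covariance matrix: C[i, j] = exp(-|x_i - x_j|^2 / (2 ell^2)).
# Storage and assembly cost grow like n^2, which motivates low-rank methods.
C = np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * ell**2))
print(C.shape, C.nbytes / 1e6, "MB")
```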

Covariance matrices are naturally symmetric and positive semi-definite, but in the applications we are interested in they are typically dense. To avoid the enormous cost of creating and handling these dense matrices, efficient low-rank approximations such as the pivoted Cholesky decomposition or the adaptive cross approximation (ACA) have been developed over the last decade.
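For concreteness, here is a minimal sketch of the pivoted Cholesky decomposition in Python, assuming entrywise access to the covariance matrix through a hypothetical `kernel(i, j)` function; the trace-based stopping rule is the standard one and not necessarily the exact variant used by the speakers.

```python
# Minimal pivoted Cholesky sketch: builds a rank-k factor L with C ≈ L @ L.T,
# accessing only individual matrix entries via the (hypothetical) `kernel`.
import numpy as np

def pivoted_cholesky(kernel, n, tol=1e-8, max_rank=200):
    d = np.array([kernel(i, i) for i in range(n)])  # diagonal of the residual
    L = np.zeros((n, max_rank))
    piv = []
    for k in range(max_rank):
        if d.sum() <= tol:                 # trace of the residual is small enough
            break
        i = int(np.argmax(d))              # next pivot: largest residual variance
        piv.append(i)
        col = np.array([kernel(j, i) for j in range(n)])
        col -= L[:, :k] @ L[i, :k]         # subtract previous rank-1 terms
        L[:, k] = col / np.sqrt(d[i])
        d -= L[:, k]**2                    # update residual diagonal
        d = np.maximum(d, 0.0)             # guard against round-off
    return L[:, :len(piv)], piv            # C ≈ L @ L.T
```

Only the pivot columns are ever formed, so each step costs O(n) kernel evaluations rather than touching all n^2 entries.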

The story does not end here: recently, attention has shifted to parameterized covariance operators, owing to their increased modeling capacity, e.g., in Bayesian inverse problems or in Gaussian process regression with hyperparameters in machine learning. We are now faced with the task of approximating a parametric covariance matrix where the parameter itself is a random process. Simply repeating the ACA or the pivoted Cholesky decomposition for each parameter value is inefficient and almost certainly too expensive in practice.

We introduce and study two algorithms for the approximation of parametric families of covariance matrices. The first approach is a (non-certified) approximation that employs a reduced basis associated with a collection of eigenvectors for specific parameter values. The second approach is a certified extension of the ACA in which the approximation error is controlled in the Wasserstein-2 distance between two Gaussian measures. Both approaches rely on an affine linear expansion of the covariance operator with respect to the parameter, which keeps the computational cost under control. Notably, neither algorithm requires a regular mesh in the discretization of the covariance operator, and both can be used on irregular domains.
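A hedged sketch of the two ingredients named above, under assumed notation: the affine expansion C(theta) = C_0 + sum_i theta_i C_i, whose terms are precomputed once so that each parameter evaluation stays cheap, and the Wasserstein-2 distance between two zero-mean Gaussians, W_2(N(0,A), N(0,B))^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2}), which serves as the error measure in the certified variant.

```python
# Sketch under assumed notation; function names are hypothetical.
import numpy as np
from scipy.linalg import sqrtm

def assemble(C0, terms, theta):
    """Evaluate the affine expansion C(theta) = C0 + sum_i theta[i] * terms[i]."""
    return C0 + sum(t * Ci for t, Ci in zip(theta, terms))

def wasserstein2_gaussian(A, B):
    """W2 distance between N(0, A) and N(0, B) for SPSD covariances A, B."""
    rA = sqrtm(A)
    cross = sqrtm(rA @ B @ rA)
    return np.sqrt(max(np.trace(A + B - 2.0 * cross).real, 0.0))
```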

This talk describes joint work with Daniel Kressner (EPFL), Jonas Latz (University of Cambridge), Stefano Massei (TU/e) and Marvin Eisenberger (TUM).

computational engineering, finance, and science; numerical analysis

Audience: researchers in the topic


E-NLA - Online seminar series on numerical linear algebra

Series comments: E-NLA is an online seminar series dedicated to topics in Numerical Linear Algebra. Talks take place on Wednesdays at 4pm (Central European Time) via Zoom and are initially scheduled on a weekly basis.

To join the seminar, please complete the sign up form at the bottom of the webpage. Information about how to connect to the conference call will be circulated via email to all registered attendees.

Organizers: Melina Freitag, Stefan Güttel, Daniel Kressner, Jörg Liesen, Valeria Simoncini, Alex Townsend, Bart Vandereycken*
*contact for this listing
