Semester seminar of junior researchers

S. Leveque, R. Khan, Y. Ma

19-Nov-2025, 08:00-09:00

Abstracts:

Santolo Leveque

An Augmented Lagrangian preconditioner for the control of the Navier–Stokes equations
Optimal control problems with PDEs as constraints arise frequently in scientific and industrial applications. Due to the difficulties arising in their numerical solution, researchers have devoted great effort to devising robust solvers for this class of problems. An example of a highly challenging problem attracting significant attention is the distributed control of incompressible viscous fluid flow, where the physics is described by the incompressible Navier–Stokes equations. Since the PDE constraints are nonlinear, obtaining a solution of a Navier–Stokes control problem requires iteratively solving linearizations of the problem until a prescribed tolerance on the nonlinear residual is reached. In this talk, we present efficient and robust preconditioned iterative methods for the solution of the stationary incompressible Navier–Stokes control problem, when employing an inexact Newton linearization of the first-order optimality conditions. The iterative solver is based on an augmented Lagrangian preconditioner. By employing saddle-point theory, we derive suitable approximations of the (1,1)-block and of the Schur complement. Numerical experiments show the effectiveness and robustness of our approach for a range of problem parameters.
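
For orientation, the display below sketches the classical augmented Lagrangian idea for a generic saddle-point system; this is a paraphrase for readers new to the topic, not the speaker's exact formulation, and the symbols A, B, W, gamma are generic placeholders:

\[
\mathcal{K} = \begin{bmatrix} A & B^{T} \\ B & 0 \end{bmatrix},
\qquad
A_{\gamma} = A + \gamma \, B^{T} W^{-1} B,
\qquad
\mathcal{P}_{\gamma} = \begin{bmatrix} A_{\gamma} & B^{T} \\ 0 & -\tfrac{1}{\gamma} W \end{bmatrix},
\]

where W is a suitable weight matrix (e.g. a pressure mass matrix). As gamma grows, the block -W/gamma becomes an increasingly accurate Schur-complement approximation, at the price of a harder (1,1)-block solve, so a robust approximation of A_gamma is the key ingredient.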

Ritesh Khan

Accelerating Dense Matrix Computations Using Hierarchical Matrices
Dense matrices arise frequently across many areas, such as PDEs, inverse problems, integral equations, machine learning, and kernel methods. In many practical applications, these dense matrices can be very large, making matrix operations involving them quite challenging. For example, the direct evaluation of a dense matrix-vector product in potential theory requires O(N^2) operations, and solving a dense linear system with a naive direct method (such as LU) requires O(N^3) operations. Both become computationally prohibitive for large N. To address this, large dense matrices are usually approximated by block low-rank representations, commonly known as hierarchical matrices. In this talk, I will discuss different types of hierarchical matrices and how they can be used to design fast and scalable solvers. I will also show a few interesting applications that highlight the power of hierarchical matrices.
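
As a small self-contained illustration (not material from the talk; the helper low_rank_block and the 1/|x - y| kernel are my own assumptions), the Python/NumPy snippet below compresses one well-separated off-diagonal block of a kernel matrix with a truncated SVD, which is the basic building block of block low-rank / hierarchical-matrix representations:

import numpy as np

def low_rank_block(block, tol=1e-8):
    # Truncated SVD: block is approximated as U @ V.T with numerical rank k.
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    k = max(1, int(np.sum(s > tol * s[0])))
    return U[:, :k] * s[:k], Vt[:k, :].T

# two well-separated point clusters and the kernel 1 / |x - y|
x = np.linspace(0.0, 1.0, 500)
y = np.linspace(10.0, 11.0, 500)
block = 1.0 / np.abs(x[:, None] - y[None, :])

U, V = low_rank_block(block)
print("numerical rank:", U.shape[1],
      "relative error:", np.linalg.norm(block - U @ V.T) / np.linalg.norm(block))

Because the two clusters are well separated, the kernel is smooth over the block and its singular values decay rapidly, so a small rank already gives near machine-precision accuracy while the storage drops from O(N^2) to O(Nk).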

Yuxin Ma

On a shrink-and-expand technique for symmetric block eigensolvers
In symmetric block eigenvalue algorithms, such as subspace iteration and the locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm, a large block size is often employed to achieve robustness and rapid convergence. However, a large block size also increases the computational cost. Traditionally, the block size is reduced only after some eigenpairs have converged, a process known as deflation. In this work, we propose a more aggressive, non-deflation-based technique in which the block size is adjusted dynamically as the iteration proceeds. This technique can be applied to a wide range of block eigensolvers, reducing computational cost without compromising convergence speed. We present three adaptive strategies for adjusting the block size and apply them to four well-known eigensolvers as examples. Detailed theoretical analysis and numerical experiments illustrate the efficiency of the proposed technique; in practice, an overall acceleration of 20% to 30% is observed.
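
To make the idea of a dynamically adjusted block size concrete, here is a minimal Python/NumPy sketch (my own illustration, not the speakers' algorithm or their shrink criteria): plain subspace iteration that drops trailing columns once their Ritz values fall well below the wanted part of the spectrum.

import numpy as np

def subspace_iteration_shrink(A, nev, block0, maxit=200, tol=1e-8, seed=0):
    # Subspace iteration for the largest eigenpairs of a symmetric positive
    # definite matrix, with a crude dynamic shrinking of the block size.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = np.linalg.qr(rng.standard_normal((n, block0)))[0]
    for _ in range(maxit):
        X = np.linalg.qr(A @ X)[0]          # power step + re-orthonormalization
        w, Q = np.linalg.eigh(X.T @ A @ X)  # Rayleigh-Ritz projection
        order = np.argsort(w)[::-1]         # largest Ritz values first
        w, X = w[order], X @ Q[:, order]
        res = np.linalg.norm(A @ X[:, :nev] - X[:, :nev] * w[:nev], axis=0)
        if res.max() < tol * abs(w[0]):
            break
        # hypothetical shrink rule: keep only columns whose Ritz values stay
        # within a factor of the smallest wanted one (never fewer than nev + 2)
        keep = max(nev + 2, int(np.sum(w > 0.5 * w[nev - 1])))
        X = X[:, :min(keep, X.shape[1])]
    return w[:nev], X[:, :nev]

# usage: 5 largest eigenpairs of a random SPD matrix, initial block size 20
rng = np.random.default_rng(1)
B = rng.standard_normal((300, 300))
A = B @ B.T
vals, vecs = subspace_iteration_shrink(A, nev=5, block0=20)

The shrink rule here is deliberately naive; the point is only that each dropped column removes one matrix-vector product and one dense column from the projected problem per iteration, which is where the cost savings of a dynamically adjusted block size come from.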

Computer science, Mathematics

Audience: researchers in the topic


Modelling of materials - theory, model reduction and efficient numerical methods (UNCE MathMAC)

Organizers: Josef Málek*, Karel Tůma*, Anna Balci*
*contact for this listing
