BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:S. Leveque\, R. Khan\, Y. Ma
DTSTART:20251119T080000Z
DTEND:20251119T090000Z
DTSTAMP:20260422T122259Z
UID:MathMAC/44
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/MathMAC/44/"
 >Semester seminar of junior researchers</a>\nby S. Leveque\, R. Khan\, Y. 
 Ma as part of Modelling of materials - theory\, model reduction and effici
 ent numerical methods (UNCE MathMAC)\n\n\nAbstract\n<b>Santolo Leveque</b>
 <p>\n<b>An Augmented Lagrangian preconditioner for the control of the Navi
 er–Stokes equations</b><br>\nOptimal control problems with PDEs as const
 raints arise very often in scientific and industrial applications. Due t
 o the difficulties arising in their numerical solution\, researchers hav
 e put great effort into devising robust solvers for this class of proble
 ms. 
 An example of a highly challenging problem attracting significant attentio
 n is the distributed control of incompressible viscous fluid flows
 . In this case\, the physics is described by the incompressible Navier–S
 tokes equations. Since the constraint PDEs are nonlinear\, obtaining a s
 olution of Navier–Stokes control problems requires iteratively solving 
 linearizations of the problem until a prescribed tolerance on the nonlin
 ear residual is achieved. In this talk\, we present ef
 ficient and robust preconditioned iterative methods for the solution of th
 e stationary incompressible Navier–Stokes control problem\, when employi
 ng an inexact Newton linearization of the first-order optimality condition
 s. The iterative solver is based on an augmented Lagrangian preconditioner
 . By employing saddle-point theory\, we derive suitable approximations of 
 the (1\,1)-block and the Schur complement. Numerical experiments show the 
 effectiveness and robustness of our approach\, for a range of problem para
 meters.<p>\n\n\n<b>Ritesh Khan</b><p>\n<b>Accelerating Dense Matrix Comput
 ations Using Hierarchical Matrices</b><br>\nDense matrices arise frequentl
 y across many areas\, such as PDEs\, inverse problems\, integral equations
 \, machine learning\, kernel methods\, etc. In many practical applications
 \, these dense matrices can be very large\, making matrix operations invol
 ving them quite challenging. For example\, the direct evaluation of the de
 nse matrix-vector product in potential theory requires O(N^2) operatio
 ns and solving a dense linear system using naive direct methods (such as L
 U) requires O(N^3) operations. Both operations become computationally proh
 ibitive for large N. To address this\, large dense matrices are usually ap
 proximated using block low-rank representations\, commonly known as hierar
 chical matrices. In this talk\, I will discuss different types of hierarch
 ical matrices and how they can be used to design fast and scalable solvers
 . I’ll also show a few interesting applications that highlight the power
  of hierarchical matrices.<p>\n\n<b>Yuxin Ma</b><p>\n<b>On a shrink-and-ex
 pand technique for symmetric block eigensolvers</b><br>\nIn symmetric bloc
 k eigenvalue algorithms\, such as the subspace iteration algorithm and the
  locally optimal block preconditioned conjugate gradient (LOBPCG) algorith
 m\, a large block size is often employed to achieve robustness and rapid c
 onvergence. However\, using a large block size also increases the computat
 ional cost. Traditionally\, the block size is reduced after some eigenpa
 irs have converged\, a process known as deflation. In this work\, we pro
 pose
  a non-deflation-based\, more aggressive technique\, where the block size 
 is adjusted dynamically during the algorithm. This technique can be applie
 d to a wide range of block eigensolvers\, reducing computational cost with
 out compromising convergence speed. We present three adaptive strategies f
 or adjusting the block size\, and apply them to four well-known eigensolve
 rs as examples. Detailed theoretical analysis and numerical experiments ar
 e provided to illustrate the efficiency of the proposed technique. In prac
 tice\, an overall acceleration of 20% to 30% is observed.<p>\n
LOCATION:https://researchseminars.org/talk/MathMAC/44/
END:VEVENT
END:VCALENDAR
