BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Nathan Kutz (University of Washington)
DTSTART:20200701T150000Z
DTEND:20200701T151000Z
DTSTAMP:20260422T212750Z
UID:SciDL/1
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/SciDL/1/">Op
 ening remarks</a>\nby Nathan Kutz (University of Washington) as part of Wo
 rkshop on Scientific-Driven Deep Learning (SciDL)\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/SciDL/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:George Em Karniadakis (Brown University)
DTSTART:20200701T151000Z
DTEND:20200701T160000Z
DTSTAMP:20260422T212750Z
UID:SciDL/2
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/SciDL/2/">De
 epOnet: Learning nonlinear operators based on the universal approximation 
 theorem of operators</a>\nby George Em Karniadakis (Brown University) as p
 art of Workshop on Scientific-Driven Deep Learning (SciDL)\n\n\nAbstract\n
 It is widely known that neural networks (NNs) are universal approximators 
 of continuous functions. However\, a less known but powerful result is th
 at a NN with a single hidden layer can approximate accurately any nonlinea
 r continuous operator. This universal approximation theorem of operators i
 s suggestive of the potential of NNs in learning from scattered data any c
 ontinuous operator or complex system. To realize this theorem\, we design 
 a new NN with small generalization error\, the deep operator network (Deep
 ONet)\, consisting of a NN for encoding the discrete input function space 
 (branch net) and another NN for encoding the domain of the output function
 s (trunk net). We demonstrate that DeepONet can learn various explicit ope
 rators\, e.g.\, integrals and fractional Laplacians\, as well as implicit 
 operators that represent deterministic and stochastic differential equatio
 ns. We study\, in particular\, different formulations of the input functio
 n space and their effect on the generalization error.\n
LOCATION:https://researchseminars.org/talk/SciDL/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Frank Noe (FU Berlin)
DTSTART:20200701T160000Z
DTEND:20200701T162500Z
DTSTAMP:20260422T212750Z
UID:SciDL/3
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/SciDL/3/">Pa
 uliNet: Deep neural network solution of the electronic Schrödinger Equati
 on</a>\nby Frank Noe (FU Berlin) as part of Workshop on Scientific-Driven 
 Deep Learning (SciDL)\n\n\nAbstract\nThe electronic Schrödinger equation 
 describes fundamental properties of molecules and materials\, but can only
  be solved analytically for the hydrogen atom. The numerically exact full 
 configuration-interaction method is exponentially expensive in the number 
 of electrons. Quantum Monte Carlo is a possible way out: it scales well to
  large molecules\, can be parallelized\, and its accuracy has\, as yet\, o
 nly been limited by the flexibility of the wave function ansatz used. Here
  we propose PauliNet\, a deep-learning wave function ansatz that achieves 
 nearly exact solutions of the electronic Schrödinger equation. PauliNet h
 as a multireference Hartree-Fock solution built in as a baseline\, incorpo
 rates the physics of valid wave functions\, and is trained using variation
 al quantum Monte Carlo (VMC). PauliNet outperforms comparable state-of-the
 -art VMC ansatzes for atoms\, diatomic molecules and a strongly-correlated
  hydrogen chain by a margin and yet is computationally efficient. We antic
 ipate that thanks to the favourable scaling with system size\, this method
  may become a new leading method for highly accurate electronic-structure
 calculations on medium-sized molecular systems.\n
LOCATION:https://researchseminars.org/talk/SciDL/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alejandro Queiruga (Google\, LLC)
DTSTART:20200701T162500Z
DTEND:20200701T165000Z
DTSTAMP:20260422T212750Z
UID:SciDL/4
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/SciDL/4/">Co
 ntinuous-in-Depth Neural Networks through Interpretation of Learned Dynami
 cs</a>\nby Alejandro Queiruga (Google\, LLC) as part of Workshop on Scient
 ific-Driven Deep Learning (SciDL)\n\n\nAbstract\nData-driven learning of d
 ynamical systems is of interest to the scientific community\, which wants 
 to recover information about the true physics from the discretized model\,
  and the machine learning community\, which wants to improve model interpr
 etability and performance. We present a refined interpretation of learned 
 dynamical models by investigating canonical systems. Recent ML literature 
 draws a metaphor between residual components of neural networks and a forw
 ard Euler time integrator\, but we show that these components actually lea
 rn a more accurate integrator. We examine the harmonic oscillator\, the 1D
  wave equation\, and the pendulum in two forms\, using purely linear models
 \, feed-forward shallow neural networks\, and neural networks embedded in
  time integrators. Each of the model configurations overfits to a better
  operator than commonly understood\, confounding recovery of physics and
  attempts to improve the algorithms. We show two analytical methods for
  reconstru
 cting underlying operators from linear systems. For the nonlinear problems
 \, unmodified neural networks outperform the expected numerical methods\, 
 but do not allow for inspection or generalization. Embedding the models in
  integrators such as RK4 improves performance and generalizability. Howeve
 r\, for the constrained pendulum\, the model is still better than expected
 \, exhibiting better-than-expected stiffness-stability. We conclude by rev
 isiting the components of neural networks where improvements are suggested
 .\n
LOCATION:https://researchseminars.org/talk/SciDL/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael Muehlebach (UC Berkeley)
DTSTART:20200701T165000Z
DTEND:20200701T171500Z
DTSTAMP:20260422T212750Z
UID:SciDL/5
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/SciDL/5/">Op
 timization with Momentum: Dynamical\, Control-Theoretic\, and Symplectic P
 erspectives</a>\nby Michael Muehlebach (UC Berkeley) as part of Workshop o
 n Scientific-Driven Deep Learning (SciDL)\n\n\nAbstract\nMy talk will focu
 s on the analysis of accelerated first-order optimization algorithms. I wi
 ll show how the continuous dependence of the iterates with respect to thei
 r initial condition can be exploited to characterize the convergence rate.
  The result establishes criteria for accelerated convergence that are easi
 ly verifiable and applicable to a large class of first-order optimization 
 algorithms. The analysis is not restricted to the convex setting and unifi
 es discrete-time and continuous-time models. It also rigorously explains w
 hy structure-preserving discretization schemes are important for momentum-
 based algorithms.\n
LOCATION:https://researchseminars.org/talk/SciDL/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tess Smidt (LBL)
DTSTART:20200701T204000Z
DTEND:20200701T210500Z
DTSTAMP:20260422T212750Z
UID:SciDL/6
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/SciDL/6/">Eu
 clidean Neural Networks for Emulating Ab Initio Calculations and Generatin
 g Atomic Geometries</a>\nby Tess Smidt (LBL) as part of Workshop on Scient
 ific-Driven Deep Learning (SciDL)\n\n\nAbstract\nAtomic systems (molecules
 \, crystals\, proteins\, nanoclusters\, etc.) are naturally represented by
  a set of coordinates in 3D space labeled by atom type. This is a challeng
 ing representation to use for neural networks because the coordinates are 
 sensitive to 3D rotations and translations and there is no canonical orien
 tation or position for these systems. We present a general neural network 
 architecture that naturally handles 3D geometry and operates on the scalar
 \, vector\, and tensor fields that characterize physical systems. Our netw
 orks are locally equivariant to 3D rotations and translations at every lay
 er. In this talk\, we describe how the network achieves these equivariance
 s and demonstrate the capabilities of our network using simple tasks. We
 ’ll also present examples of applying Euclidean networks to problems
  in quantum chemistry and discuss techniques for using these networks to e
 ncode and decode geometry.\n
LOCATION:https://researchseminars.org/talk/SciDL/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael P. Brenner (Harvard University)
DTSTART:20200701T190000Z
DTEND:20200701T195000Z
DTSTAMP:20260422T212750Z
UID:SciDL/7
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/SciDL/7/">Ma
 chine Learning for Partial Differential Equations</a>\nby Michael P. Brenn
 er (Harvard University) as part of Workshop on Scientific-Driven Deep Lear
 ning (SciDL)\n\n\nAbstract\nI will discuss several ways in which machine l
 earning can be used for solving and understanding the solutions of nonline
 ar partial differential equations. Most of the talk will focus on learning
  discretizations for coarse graining the numerical solutions of PDEs. I wi
 ll start with examples in 1D\, then move on to advection/diffusion in a tu
 rbulent flow\, and finally the Navier-Stokes equation.\n
LOCATION:https://researchseminars.org/talk/SciDL/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Elizabeth Qian (MIT)
DTSTART:20200701T195000Z
DTEND:20200701T201500Z
DTSTAMP:20260422T212750Z
UID:SciDL/8
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/SciDL/8/">Li
 ft & Learn: Analyzable\, Generalizable Data-Driven Models for Nonlinear PD
 Es</a>\nby Elizabeth Qian (MIT) as part of Workshop on Scientific-Driven D
 eep Learning (SciDL)\n\n\nAbstract\nWe present Lift & Learn\, a physics-in
 formed method for learning low-dimensional models for nonlinear PDEs. The 
 method exploits knowledge of a system’s governing equations to identify 
 a coordinate transformation in which the system dynamics have quadratic st
 ructure. This transformation is called a lifting map because it often adds
  auxiliary variables to the system state. The lifting map is applied to da
 ta obtained by evaluating a model for the original nonlinear system. This 
 lifted data is projected onto its leading principal components\, and low-d
 imensional linear and quadratic matrix operators are fit to the lifted red
 uced data using a least-squares operator inference procedure. Analysis of 
 our method shows that the Lift & Learn models are able to capture the syst
 em physics in the lifted coordinates at least as accurately as traditional
  intrusive model reduction approaches. This preservation of system physics
  makes the Lift & Learn models robust to changes in inputs. Numerical expe
 riments on the FitzHugh-Nagumo neuron activation model and the compressibl
 e Euler equations demonstrate the generalizability of our model.\n
LOCATION:https://researchseminars.org/talk/SciDL/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lars Ruthotto (Emory University)
DTSTART:20200701T201500Z
DTEND:20200701T204000Z
DTSTAMP:20260422T212750Z
UID:SciDL/9
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/SciDL/9/">De
 ep Neural Networks Motivated by PDEs</a>\nby Lars Ruthotto (Emory Universi
 ty) as part of Workshop on Scientific-Driven Deep Learning (SciDL)\n\n\nAb
 stract\nOne of the most promising areas in artificial intelligence is deep
  learning\, a form of machine learning that uses neural networks containin
 g many hidden layers. Recent success has led to breakthroughs in applicati
 ons such as speech and image recognition. However\, more theoretical insig
 ht is needed to create a rigorous scientific basis for designing and train
 ing deep neural networks\, increasing their scalability\, and providing in
 sight into their reasoning. This talk bridges the gap between partial diff
 erential equations (PDEs) and neural networks and presents a new mathemati
 cal paradigm that simplifies designing\, training\, and analyzing deep neu
 ral networks. It shows that training deep neural networks can be cast as a
  dynamic optimal control problem similar to path-planning and optimal mass
  transport. The talk outlines how this interpretation can improve the effe
 ctiveness of deep neural networks. First\, the talk introduces new types o
 f neural networks inspired by parabolic\, hyperbolic\, and reaction-dif
 fusion PDEs. Second\, the talk outlines how to accelerate training by expl
 oiting reversibility properties of the underlying PDEs.\n
LOCATION:https://researchseminars.org/talk/SciDL/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yasaman Bahri (Google Brain)
DTSTART:20200701T171500Z
DTEND:20200701T174000Z
DTSTAMP:20260422T212750Z
UID:SciDL/10
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/SciDL/10/">L
 earning Dynamics of Wide\, Deep Neural Networks: Beyond the Limit of Infin
 ite Width</a>\nby Yasaman Bahri (Google Brain) as part of Workshop on Scie
 ntific-Driven Deep Learning (SciDL)\n\n\nAbstract\nWhile many practical ad
 vancements in deep learning have been made in recent years\, a scientific\
 , and ideally theoretical\, understanding of modern neural networks is sti
 ll in its infancy. At the heart of this is a better understanding of the
  learning dynamics of such systems. As a first step towards tackling this p
 roblem\, one can try to identify limits that have theoretical tractability
  and are potentially practically relevant. I’ll begin by surveying our b
 ody of work that has investigated the infinite width limit of deep network
 s. These results establish exact mappings between deep networks and other
  existing machine learning methods (namely\, Gaussian processes and kerne
 l methods) but with novel modifications to them that had not been previous
 ly encountered. With these exact mappings in hand\, the natural question i
 s to what extent they bear relevance to neural networks at finite width. I
 ’ll argue that the choice of learning rate is a crucial factor in dynami
 cs away from this limit and naturally classifies deep networks into two cl
 asses separated by a sharp phase transition. This is elucidated in a class
  of simple\, solvable models we present\, which give quantitative
  predictions for the two phases. Quite remarkably\, we test these
  empirically in prac
 tical settings and find excellent agreement.\n
LOCATION:https://researchseminars.org/talk/SciDL/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Omri Azencot (UCLA)
DTSTART:20200701T210500Z
DTEND:20200701T213000Z
DTSTAMP:20260422T212750Z
UID:SciDL/11
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/SciDL/11/">R
 obust Prediction of High-Dimensional Dynamical Systems using Koopman Deep 
 Networks</a>\nby Omri Azencot (UCLA) as part of Workshop on Scientific-Dri
 ven Deep Learning (SciDL)\n\n\nAbstract\nWe present a new deep learning ap
 proach for the analysis and processing of time series data. At the core of
  our work is the Koopman operator\, which fully encodes a nonlinear
  dynamical system. Unlike the majority of Koopman-based models\, we
  consider dynami
 cs for which the Koopman operator is invertible. We exploit the structure 
 of these systems to design a novel Physically-Constrained Learning (PCL) m
 odel that takes into account the inverse dynamics while penalizing for inv
 erse prediction. Our architecture is composed of an autoencoder component 
 and two Koopman layers for the dynamics and their inverse. To motivate our
  network design\, we investigate the connection between invertible Koopman
  operators and pointwise maps\, and our analysis yields a loss term which 
 we employ in practice. To evaluate our work\, we consider several challeng
 ing nonlinear systems including the pendulum\, fluid flows on curved domai
 ns and real climate data. We compare our approach to several baseline meth
 ods\, and we demonstrate that it yields the best results for long time pre
 dictions and in noisy settings.\n
LOCATION:https://researchseminars.org/talk/SciDL/11/
END:VEVENT
END:VCALENDAR
