BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Alejandro Queiruga (Google\, LLC)
DTSTART:20200701T162500Z
DTEND:20200701T165000Z
DTSTAMP:20260423T040116Z
UID:SciDL/4
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/SciDL/4/">Co
 ntinuous-in-Depth Neural Networks through Interpretation of Learned Dynami
 cs</a>\nby Alejandro Queiruga (Google\, LLC) as part of Workshop on Scient
 ific-Driven Deep Learning (SciDL)\n\n\nAbstract\nData-driven learning of d
 ynamical systems is of interest to the scientific community\, which wants 
 to recover information about the true physics from the discretized model\,
  and the machine learning community\, which wants to improve model interpr
 etability and performance. We present a refined interpretation of learned 
 dynamical models by investigating canonical systems. Recent ML literature 
 draws a metaphor between residual components of neural networks and a forw
 ard Euler time integrator\, but we show that these components actually lea
 rn a more accurate integrator. We examine the harmonic oscillator\, 1D w
 ave equation\, and the pendulum in two forms\, using purely linear models\
 , feed-forward shallow neural networks\, and neural networks embedded in t
 ime integrators. Each of the model configurations overfits to a better oper
 ator than commonly understood\, confounding recovery of physics and attemp
 ts to improve the algorithms. We show two analytical methods for reconstru
 cting underlying operators from linear systems. For the nonlinear problems
 \, unmodified neural networks outperform the expected numerical methods\, 
 but do not allow for inspection or generalization. Embedding the models in
  integrators such as RK4 improves performance and generalizability. Howeve
 r\, for the constrained pendulum\, the model still exceeds expectations
 \, exhibiting better-than-expected stiffness-stability. We conclude by rev
 isiting the components of neural networks where improvements are suggested
 .\n
LOCATION:https://researchseminars.org/talk/SciDL/4/
END:VEVENT
END:VCALENDAR
