Non-perturbative Non-Lagrangian Neural Network Field Theories

Anindita Maiti (Northeastern University)

12-Sep-2022, 14:00-15:00

Abstract: Ensembles of Neural Network (NN) output functions describe field theories. These Neural Network Field Theories become free, i.e. Gaussian, in the limit of infinite width with independent parameter distributions, by the Central Limit Theorem (CLT). Interaction terms, i.e. non-Gaussianities, arise in these field theories from violations of the CLT at finite width and/or from correlated parameter distributions. In general, non-Gaussianities render Neural Network Field Theories non-perturbative and non-Lagrangian. In this talk, I will describe methods to study such non-perturbative, non-Lagrangian field theories in Neural Networks via a dual framework over parameter distributions. This duality lets us study correlation functions and symmetries of NN field theories in the absence of an action; further, the partition function can be approximated as a series over connected correlation functions. Thus, Neural Networks allow us to study non-perturbative, non-Lagrangian field theories through their architectures, which can benefit both Machine Learning and physics.
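The Gaussian limit described in the abstract can be checked numerically. The sketch below, under assumptions not taken from the talk (a hypothetical one-hidden-layer tanh network with i.i.d. standard-normal parameters and 1/sqrt(width) output scaling), samples the network output at a fixed input across an ensemble of random parameter draws and measures the excess kurtosis, a simple non-Gaussianity indicator that the CLT drives toward zero as the width grows:

```python
import numpy as np


def nn_output_samples(width, n_nets=20000, seed=0):
    """Sample outputs f(x) at a fixed input x over an ensemble of random
    one-hidden-layer networks (illustrative architecture, not from the talk):
        f(x) = sum_j W_j * tanh(b_j + u_j * x) / sqrt(width),
    with all parameters i.i.d. standard normal.
    """
    rng = np.random.default_rng(seed)
    x = 1.0
    u = rng.standard_normal((n_nets, width))   # input-to-hidden weights
    b = rng.standard_normal((n_nets, width))   # hidden biases
    W = rng.standard_normal((n_nets, width))   # hidden-to-output weights
    return (W * np.tanh(b + u * x)).sum(axis=1) / np.sqrt(width)


def excess_kurtosis(samples):
    """Excess kurtosis: 0 for a Gaussian, nonzero signals interactions."""
    z = (samples - samples.mean()) / samples.std()
    return (z ** 4).mean() - 3.0


# Finite width gives a visibly non-Gaussian ensemble; the statistic
# shrinks toward 0 (up to sampling noise) as the width increases.
for width in (2, 16, 128, 1024):
    print(width, excess_kurtosis(nn_output_samples(width)))
```

At small width the excess kurtosis is clearly nonzero (an interacting, non-Gaussian field theory), while at large width it decays roughly like 1/width, illustrating the free-theory limit the abstract refers to.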

machine learning, mathematical physics, commutative algebra, algebraic geometry, algebraic topology, combinatorics, differential geometry, number theory, representation theory

Audience: researchers in the topic


Machine Learning Seminar

Series comments: Online machine learning in pure mathematics seminar, typically held on Wednesday. This seminar takes place online via Zoom.

For recordings of past talks and copies of the speakers' slides, please visit the seminar homepage at: kasprzyk.work/seminars/ml.html

Organizers: Alexander Kasprzyk*, Lorenzo De Biase*, Tom Oliver, Sara Veneziale
*contact for this listing
