BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Taco Cohen (Qualcomm AI Research)
DTSTART:20200506T160000Z
DTEND:20200506T173000Z
DTSTAMP:20260422T225703Z
UID:PhysicsMeetsML/1
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/PhysicsMeets
 ML/1/">Natural Graph Networks</a>\nby Taco Cohen (Qualcomm AI Research) as
  part of Physics ∩ ML\n\n\nAbstract\nMessage passing algorithms are the 
 core of most neural networks that process information on graphs. Conventio
 nally\, such methods are invariant under permutation of the messages and h
 ence forget how the information flows through the network. Analyzing the l
 ocal symmetries of the graph\, we show that a more general message passing
 network can in fact be sensitive to the flow of information by using diff
 erent kernels on different edges. This leads to an equivariant message pa
 ssing algorithm that is more expressive than conventional invariant messa
 ge passing\, overcoming fundamental limitations of the latter. We derive
  the weight sharing and kernel constraints by modelling the symmetries us
 ing elementary category theory and show that equivariant kernels are
  “just” natural transformations between two functors. This general
  formulation\, which we call Natural Networks\, gives a unified theory to
  model many distinct forms of equivariant neural networks.\n
LOCATION:https://researchseminars.org/talk/PhysicsMeetsML/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Phiala Shanahan (MIT)
DTSTART:20200520T160000Z
DTEND:20200520T173000Z
DTSTAMP:20260422T225703Z
UID:PhysicsMeetsML/2
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/PhysicsMeets
 ML/2/">Building symmetries into generative flow models</a>\nby Phiala Shan
 ahan (MIT) as part of Physics ∩ ML\n\n\nAbstract\nI will discuss recent 
 work to incorporate symmetries\, in particular gauge symmetries (local sym
 metry transformations that form Lie groups)\, into generative flow models.
  This work is motivated by applications of generative models to physics s
 imulation\, in particular lattice field theory.\n
LOCATION:https://researchseminars.org/talk/PhysicsMeetsML/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ard Louis (Oxford)
DTSTART:20200603T160000Z
DTEND:20200603T173000Z
DTSTAMP:20260422T225703Z
UID:PhysicsMeetsML/3
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/PhysicsMeets
 ML/3/">Why do neural networks generalise in the overparameterised regime?<
 /a>\nby Ard Louis (Oxford) as part of Physics ∩ ML\n\n\nAbstract\nOne of
  the most surprising properties of deep neural networks (DNNs) is that the
 y typically perform best in the overparameterised regime. Physicists are t
 aught from a young age that having more parameters than datapoints is a te
 rrible idea. This intuition can be formalised in standard learning theory 
 approaches\, based for example on model capacity\, which also predict that
  DNNs should heavily over-fit in this regime\, and therefore not generalis
 e at all. So why do DNNs work so well? We use a version of the coding theo
 rem from Algorithmic Information Theory to argue that DNNs are generically
  biased towards simple solutions. Such an inbuilt Occam’s razor means th
 at they are biased towards solutions that typically generalise well. We fu
 rther explore the interplay between this simplicity bias and the error spe
 ctrum on a dataset to develop a detailed Bayesian theory of training and g
 eneralisation that explains why and when SGD-trained DNNs generalise\, and
  when they should not. This picture also allows us to derive tight PAC-Bay
 es bounds that closely track DNN learning curves and can be used to ration
 alise differences in performance across architectures. Finally\, we will d
 iscuss some deep analogies between the way DNNs explore function space an
 d biases in the arrival of variation that explain certain trends observed
  in biological evolution.\n
LOCATION:https://researchseminars.org/talk/PhysicsMeetsML/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Koji Hashimoto (Osaka University)
DTSTART:20200617T160000Z
DTEND:20200617T173000Z
DTSTAMP:20260422T225703Z
UID:PhysicsMeetsML/4
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/PhysicsMeets
 ML/4/">Deep learning and quantum gravity</a>\nby Koji Hashimoto (Osaka Uni
 versity) as part of Physics ∩ ML\n\n\nAbstract\nFormulating quantum grav
 ity is one of the ultimate goals of fundamental physics. Recent progress
  in string theory has brought a concrete formulation called the AdS/CFT
  correspondence\, in which a gravitational spacetime emerges from lower-d
 imensional non-gravitational quantum systems\, but we still lack an under
 standing of how the correspondence works. I discuss similarities between
  quantum gravity and deep learning architectures\, by regarding the neura
 l network as a discretized spacetime. In particular\, questions such as w
 hen\, why\, and how a neural network can be a space or a spacetime may le
 ad to a novel way to look at machine learning. I concretely implement the
  AdS/CFT framework in a deep learning architecture and show the emergence
  of a curved spacetime as a neural network from given training data of qu
 antum systems.\n
LOCATION:https://researchseminars.org/talk/PhysicsMeetsML/4/
END:VEVENT
END:VCALENDAR
