Fundamental Components of Deep Learning: A category-theoretic approach

Bruno Gavranović (Strathclyde)

20-Sep-2023, 09:00-10:00

Abstract: Deep learning, despite its remarkable achievements, is still a young field. Like the early stages of many scientific disciplines, it is permeated by ad hoc design decisions. From the intricacies of implementing backpropagation, through new and poorly understood phenomena such as double descent, scaling laws, and in-context learning, to a growing zoo of neural network architectures, there are few unifying principles in deep learning, and no uniform and compositional mathematical foundation. In this talk I'll present a novel perspective on deep learning by utilising the mathematical framework of category theory. I'll identify two main conceptual components of neural networks, report on progress made in recent years by the research community in formalising them, and show how they've been used to describe backpropagation, architectures, and supervised learning in general, shedding new light on the existing field.
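(Editor's note: to give a flavour of the categorical perspective on backpropagation mentioned in the abstract, here is a minimal sketch in the style of parametric lenses, where a layer is a forward map paired with a backward map and composing layers composes both passes so that the chain rule falls out of composition. The Lens class, compose function, and scalar linear layer below are illustrative assumptions only, not definitions taken from the talk.)

```python
from dataclasses import dataclass
from typing import Callable, Tuple, Any

@dataclass
class Lens:
    """A parametric lens: a forward pass together with a backward pass.
    forward:  (params, x) -> y
    backward: (params, x, dy) -> (dparams, dx)
    """
    forward: Callable[[Any, Any], Any]
    backward: Callable[[Any, Any, Any], Tuple[Any, Any]]

def compose(f: Lens, g: Lens) -> Lens:
    """Sequential composition: data flows left-to-right through the
    forward maps, and gradients flow right-to-left through the backward
    maps. The chain rule is exactly this composition."""
    def fwd(params, x):
        p_f, p_g = params
        return g.forward(p_g, f.forward(p_f, x))
    def bwd(params, x, dy):
        p_f, p_g = params
        mid = f.forward(p_f, x)                # recompute intermediate value
        dp_g, dmid = g.backward(p_g, mid, dy)  # gradient through g
        dp_f, dx = f.backward(p_f, x, dmid)    # then through f
        return (dp_f, dp_g), dx
    return Lens(fwd, bwd)

# A scalar linear layer y = w * x as a lens (hypothetical example).
linear = Lens(
    forward=lambda w, x: w * x,
    backward=lambda w, x, dy: (dy * x, dy * w),  # (dw, dx)
)

# Composing two layers composes both the forward and backward passes.
net = compose(linear, linear)
y = net.forward((2.0, 3.0), 5.0)                 # 3 * (2 * 5) = 30.0
grads, dx = net.backward((2.0, 3.0), 5.0, 1.0)   # ((15.0, 10.0), 6.0)
print(y, grads, dx)
```

In this encoding, backpropagation is not a separate algorithm bolted onto the network: it is recovered automatically from the compositional structure, which is the kind of unifying principle the abstract refers to.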

machine learning, mathematical physics, commutative algebra, algebraic geometry, algebraic topology, combinatorics, differential geometry, number theory, representation theory

Audience: researchers in the topic


Machine Learning Seminar

Series comments: An online seminar on machine learning in pure mathematics, typically held on Wednesdays. This seminar takes place via Zoom.

For recordings of past talks and copies of the speaker's slides, please visit the seminar homepage at: kasprzyk.work/seminars/ml.html

Organizers: Alexander Kasprzyk*, Lorenzo De Biase*, Tom Oliver, Sara Veneziale
*contact for this listing
