Group invariant machine learning by fundamental domain projections

Daniel Platt (KCL)

03-May-2023, 09:00-10:00

Abstract: In many applications one wants to learn a function that is invariant under a group action — for example, classifying images of digits no matter how they are rotated. Many approaches to this exist in the literature. I will mention two approaches that are very useful in many applications, but which struggle if the group is big or acts in a complicated way. I will then explain our approach, which avoids these problems. The approach works by finding a "canonical representative" of each input element. In the example of images of digits, one may rotate the digit so that the brightest quarter is in the top-left, which would define a "canonical representative". In the general case, one has to define what that means. Our approach is useful if the group is big, and I will present experiments on the Complete Intersection Calabi-Yau and Kreuzer-Skarke datasets to show this. Our approach is useless if the group is small, and the case of rotated images of digits is an example of this. This is joint work with Benjamin Aslan and David Sheard.
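The "brightest quarter in the top-left" rule from the abstract can be sketched in a few lines. This is a minimal illustration for the four-fold rotation group acting on square grayscale images (given as nested lists of pixel values); the function names and tie-breaking rule are illustrative assumptions, not taken from the paper.

```python
def rotate90(img):
    # Rotate a square image 90 degrees counter-clockwise:
    # new[i][j] = old[j][n-1-i].
    n = len(img)
    return [[img[j][n - 1 - i] for j in range(n)] for i in range(n)]

def top_left_brightness(img):
    # Total pixel value in the top-left quadrant.
    h = len(img) // 2
    return sum(img[i][j] for i in range(h) for j in range(h))

def canonicalise(img):
    # Cycle through the four rotations of the image and return the
    # one whose top-left quadrant is brightest (an assumed, fixed
    # tie-break: first rotation in the cycle wins).
    candidates = []
    cur = img
    for _ in range(4):
        candidates.append(cur)
        cur = rotate90(cur)
    return max(candidates, key=top_left_brightness)
```

Because every rotation of an image produces the same set of four candidates, `canonicalise` returns the same representative for the whole orbit (up to ties), so any function applied to the canonical representative is automatically rotation-invariant.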

machine learning, mathematical physics, commutative algebra, algebraic geometry, algebraic topology, combinatorics, differential geometry, number theory, representation theory

Audience: researchers in the topic


Machine Learning Seminar

Series comments: Online machine learning in pure mathematics seminar, typically held on Wednesdays. This seminar takes place online via Zoom.

For recordings of past talks and copies of the speaker's slides, please visit the seminar homepage at: kasprzyk.work/seminars/ml.html

Organizers: Alexander Kasprzyk*, Lorenzo De Biase*, Tom Oliver, Sara Veneziale
*contact for this listing
