Graph Convolutional Neural Networks: The Mystery of Generalization

Gitta Kutyniok (Ludwig-Maximilians-Universität München)

11-Mar-2021, 17:00-18:00

Abstract: The tremendous importance of graph-structured data, arising for instance from recommender systems or social networks, led to the introduction of graph convolutional neural networks (GCNs). These split into spatial and spectral GCNs, where in the latter case filters are defined as elementwise multiplication in the frequency domain of a graph. Since the dataset often consists of signals defined on many different graphs, the trained network should generalize to signals on graphs unseen in the training set. One instance of this problem is the transferability of a GCN, which refers to the condition that, whenever two graphs describe the same phenomenon, a single filter or the entire network has approximately the same repercussion on both graphs. However, for a long time it was believed that spectral filters are not transferable.
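
To make the notion of a spectral filter concrete, here is a minimal sketch (not code from the talk) of the standard formulation assumed above: a graph signal is transformed to the frequency domain via the eigenvectors of the graph Laplacian, multiplied elementwise by a filter function of the eigenvalues, and transformed back. The cycle-graph example and the filter g are illustrative choices.

```python
# Minimal sketch of a spectral graph filter (illustrative, assumes the
# common Laplacian-eigendecomposition formulation of spectral GCN filters).
import numpy as np

def graph_laplacian(A):
    """Combinatorial Laplacian L = D - A of an undirected graph."""
    D = np.diag(A.sum(axis=1))
    return D - A

def spectral_filter(A, signal, g):
    """Apply the filter g(L) to a graph signal: graph Fourier transform,
    elementwise multiplication by g(eigenvalues), inverse transform."""
    L = graph_laplacian(A)
    eigvals, eigvecs = np.linalg.eigh(L)       # graph Fourier basis
    signal_hat = eigvecs.T @ signal            # graph Fourier transform
    filtered_hat = g(eigvals) * signal_hat     # pointwise filtering in frequency
    return eigvecs @ filtered_hat              # inverse transform

# Example: a heat-kernel-style low-pass filter on a small cycle graph.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
x = np.random.default_rng(0).standard_normal(n)
y = spectral_filter(A, x, lambda lam: np.exp(-lam))
print(y)
```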

In this talk we aim at debunking this common misconception by showing that if two graphs discretize the same continuous metric space, then a spectral filter or GCN has approximately the same repercussion on both graphs. Our analysis accounts for large graph perturbations and allows the two graphs to have completely different sizes and topologies, requiring only that both discretize the same underlying continuous space. Numerical results even indicate that spectral GCNs are superior to spatial GCNs when the dataset consists of signals defined on many different graphs.
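
The following toy computation (a hypothetical illustration, not the talk's experiments) conveys the flavor of this transferability statement: two cycle graphs of different sizes discretize the same circle, the same spectral filter g is applied to samples of the same continuous signal on both, and the two outputs agree with the continuous limit up to a small discretization error. The filter g, the signal f, and the node counts are arbitrary choices for illustration.

```python
# Hypothetical transferability sketch: the same spectral filter acting on
# two different discretizations of the circle gives nearly the same result.
import numpy as np

def filtered_signal(n, g, f):
    """Sample f on an n-node cycle discretizing the circle and apply g(L)."""
    theta = 2 * np.pi * np.arange(n) / n
    h = 2 * np.pi / n
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    L = (np.diag(A.sum(axis=1)) - A) / h**2    # approximates -d^2/dtheta^2
    eigvals, eigvecs = np.linalg.eigh(L)
    x = f(theta)
    y = eigvecs @ (g(eigvals) * (eigvecs.T @ x))
    return theta, y

g = lambda lam: 1.0 / (1.0 + lam)              # a simple low-pass filter
f = lambda t: np.sin(t) + 0.5 * np.cos(3 * t)

# Continuous reference: sin(k.) and cos(k.) are eigenfunctions of
# -d^2/dtheta^2 with eigenvalue k^2, so the filtered limit is
# g(1) sin(t) + 0.5 g(9) cos(3t).
target = lambda t: g(1.0) * np.sin(t) + 0.5 * g(9.0) * np.cos(3 * t)

for n in (60, 90):
    theta, y = filtered_signal(n, g, f)
    err = np.max(np.abs(y - target(theta)))
    print(f"n = {n:3d}: max deviation from continuous limit = {err:.4f}")
```

Both discretizations produce outputs close to the same continuous limit, so the filter's repercussion is approximately the same on both graphs even though they have different numbers of nodes.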

Mathematics

Audience: researchers in the topic


International Zoom Inverse Problems Seminar, UC Irvine

Organizers: Katya Krupchyk*, Knut Solna
*contact for this listing
