## Graph Neural Networks Use Graphs When They Shouldn’t

### With Maya Bechler-Speicher, Tel Aviv University


Predictions over graphs play a crucial role in various domains, including social networks and medicine.

Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data.

Although a graph structure is provided as input to the GNN, in some cases the best solution can be obtained by ignoring it.

While GNNs have the ability to ignore the graph structure in such cases, it is not clear that they will.

In this talk, I will show that GNNs in fact tend to overfit the given graph structure: they use it even when a better solution can be obtained by ignoring it.

By analyzing the implicit bias of gradient-descent learning in GNNs, I will show that when the ground-truth function does not use the graph, GNNs are not guaranteed to learn a solution that ignores it, even with infinite data.

I will prove that within the family of regular graphs, GNNs are guaranteed to extrapolate when learning with gradient descent.

Then, based on our empirical and theoretical findings, I will demonstrate on real data how regular graphs can be leveraged to reduce graph overfitting and enhance performance.
Finally, I will present a recent novel approach, Cayley Graph Propagation, which propagates information over a special type of regular graph – the Cayley graphs of the special linear group SL(2, Z_n) – to reduce overfitting and alleviate information bottlenecks.
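As an aside, the graphs mentioned above are concrete finite objects: the nodes are the 2×2 matrices over Z_n with determinant 1, and edges connect a matrix to its products with a fixed generator set. The sketch below constructs such a Cayley graph by brute-force enumeration, using the standard generators [[1,1],[0,1]] and [[1,0],[1,1]] (plus their inverses); the function name and the choice of generators are illustrative assumptions, not necessarily the exact construction used in the talk.

```python
import itertools

def matmul_mod(A, B, n):
    """Multiply two 2x2 matrices, stored as flat tuples (a, b, c, d), modulo n."""
    a1, b1, c1, d1 = A
    a2, b2, c2, d2 = B
    return ((a1 * a2 + b1 * c2) % n, (a1 * b2 + b1 * d2) % n,
            (c1 * a2 + d1 * c2) % n, (c1 * b2 + d1 * d2) % n)

def cayley_graph_sl2(n):
    """Cayley graph of SL(2, Z_n) (illustrative sketch).

    Nodes: all 2x2 matrices over Z_n with determinant 1 (mod n).
    Edges: M -- M*G for each generator G (generator choice is an assumption).
    Returns (list of node matrices, set of undirected edges as index pairs).
    """
    # Enumerate SL(2, Z_n) by brute force over all 2x2 matrices mod n.
    nodes = [M for M in itertools.product(range(n), repeat=4)
             if (M[0] * M[3] - M[1] * M[2]) % n == 1]
    idx = {M: i for i, M in enumerate(nodes)}
    # Standard generators a = [[1,1],[0,1]], b = [[1,0],[1,1]] and their inverses.
    gens = [(1, 1, 0, 1), (1, 0, 1, 1),
            (1, n - 1, 0, 1), (1, 0, n - 1, 1)]
    edges = set()
    for M in nodes:
        for G in gens:
            N = matmul_mod(M, G, n)
            edges.add(tuple(sorted((idx[M], idx[N]))))
    return nodes, edges

nodes, edges = cayley_graph_sl2(3)
# |SL(2, Z_3)| = 24, and the graph is 4-regular, so there are 24*4/2 = 48 edges.
```

Because the generator set is closed under inverses, the resulting graph is regular (degree 4 here), which is exactly the property the talk exploits; node count grows quickly with n, so in practice only small n are used.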

- Speaker: Maya Bechler-Speicher, Tel Aviv University
- Friday 01 November 2024, 13:00–14:00
- Venue: MR2, Centre for Mathematical Sciences.
- Series: Cambridge Image Analysis Seminars; organiser: Ferdia Sherry.