Graph Neural Networks Use Graphs When They Shouldn’t

With Maya Bechler-Speicher, Tel Aviv University

Predictions over graphs play a crucial role in various domains, including social networks and medicine.
Graph Neural Networks (GNNs) have emerged as the dominant approach for learning on graph data.
Although a graph structure is provided as input to the GNN, in some cases the best solution can be obtained by ignoring it.
While GNNs have the capacity to ignore the graph structure in such cases, it is not clear that they will.
In this talk, I will show that GNNs actually tend to overfit the given graph structure: they use it even when a better solution can be obtained by ignoring it.
By analyzing the implicit bias of gradient-descent learning of GNNs, I will show that when the ground-truth function does not use the graph, GNNs are not guaranteed to learn a solution that ignores the graph, even with infinite data.
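To make the phenomenon concrete, here is a toy sketch (an illustration only, not the experimental setup or code from the talk; all sizes and hyperparameters are assumptions). The labels depend only on each node's own features, and the model can represent that graph-free solution by driving its neighbour weights to zero, yet gradient descent typically leaves them nonzero, so the learned predictor keeps using the irrelevant graph.

```python
# Toy sketch of graph overfitting (illustrative only; not the talk's experimental setup).
import torch

torch.manual_seed(0)
n, d = 400, 16
w_true = torch.randn(d)

def make_data(num_nodes, dim):
    X = torch.randn(num_nodes, dim)
    y = (X @ w_true > 0).float()                    # target ignores the graph entirely
    A = (torch.rand(num_nodes, num_nodes) < 0.05).float()
    A = ((A + A.T) > 0).float()                     # a random, label-irrelevant graph
    A_norm = A / A.sum(dim=1, keepdim=True).clamp(min=1.0)
    return X, y, A_norm

X_tr, y_tr, A_tr = make_data(n, d)
w_self = torch.zeros(d, requires_grad=True)         # weight on a node's own features
w_neigh = torch.zeros(d, requires_grad=True)        # weight on aggregated neighbours
opt = torch.optim.SGD([w_self, w_neigh], lr=0.5)
loss_fn = torch.nn.BCEWithLogitsLoss()

for _ in range(2000):
    opt.zero_grad()
    logits = X_tr @ w_self + (A_tr @ X_tr) @ w_neigh
    loss_fn(logits, y_tr).backward()
    opt.step()

X_te, y_te, A_te = make_data(n, d)                  # fresh nodes and a fresh graph
with torch.no_grad():
    acc_graph = (((X_te @ w_self + (A_te @ X_te) @ w_neigh) > 0).float() == y_te).float().mean()
    acc_no_graph = (((X_te @ w_self) > 0).float() == y_te).float().mean()
print(f"||w_neigh|| = {w_neigh.norm().item():.2f}  (not driven to zero)")
print(f"test accuracy using the graph: {acc_graph.item():.2f}  ignoring it: {acc_no_graph.item():.2f}")
```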
I will prove that within the family of regular graphs, GNNs are guaranteed to extrapolate when learning with gradient descent.
Then, based on our empirical and theoretical findings, I will demonstrate on real data how regular graphs can be leveraged to reduce graph overfitting and enhance performance.
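One simple way to act on that idea is sketched below: replace the task's input graph with a random d-regular graph on the same node set and propagate over that instead. This is an assumed rewiring scheme for illustration, not necessarily the method presented in the talk, and the degree d_reg is an assumed hyperparameter.

```python
# Illustrative sketch: swap the provided graph for a random d-regular graph on the
# same nodes and propagate over that instead (assumed scheme, not the talk's method).
import networkx as nx
import torch

def regular_rewiring(num_nodes: int, d_reg: int = 4, seed: int = 0) -> torch.Tensor:
    """Row-normalized adjacency of a random d_reg-regular graph on num_nodes nodes."""
    G = nx.random_regular_graph(d_reg, num_nodes, seed=seed)  # requires num_nodes * d_reg to be even
    A = torch.zeros(num_nodes, num_nodes)
    for u, v in G.edges():
        A[u, v] = A[v, u] = 1.0
    return A / A.sum(dim=1, keepdim=True)

# Usage: propagate node features over the regular graph instead of the given one.
A_reg = regular_rewiring(num_nodes=400, d_reg=4)
H = A_reg @ torch.randn(400, 16)                               # one message-passing step
```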
Finally, I will present a recent novel approach, Cayley Graph Propagation, for propagating information over a special type of regular graph, the Cayley graphs of the special linear group SL(2, Zn), to mitigate graph overfitting and information bottlenecks.
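For concreteness, the sketch below builds the Cayley graph of SL(2, Zn) by breadth-first search from the identity over the standard generators [[1,1],[0,1]] and [[1,0],[1,1]] and their inverses mod n. Treat it as an assumed illustration; the exact generator set and construction used in Cayley Graph Propagation may differ.

```python
# Sketch: BFS construction of the Cayley graph of SL(2, Z_n) over the standard
# generators and their inverses mod n. For n > 2 the resulting graph is 4-regular.
from collections import deque

def cayley_graph_sl2(n: int):
    """Return (node_index, edge_list) for the Cayley graph of SL(2, Z_n)."""
    def mul(A, B):                                   # 2x2 matrix product mod n
        a, b, c, d = A
        e, f, g, h = B
        return ((a*e + b*g) % n, (a*f + b*h) % n,
                (c*e + d*g) % n, (c*f + d*h) % n)

    gens = [(1, 1, 0, 1), (1, 0, 1, 1),              # the two standard generators
            (1, n - 1, 0, 1), (1, 0, n - 1, 1)]      # and their inverses mod n

    identity = (1, 0, 0, 1)
    index = {identity: 0}                            # matrix (a, b, c, d) -> node id
    edges = set()
    queue = deque([identity])
    while queue:                                     # BFS enumerates all of SL(2, Z_n)
        v = queue.popleft()
        for g in gens:
            w = mul(v, g)
            if w not in index:
                index[w] = len(index)
                queue.append(w)
            edges.add(tuple(sorted((index[v], index[w]))))
    return index, sorted(edges)

# Usage: for n = 5, |SL(2, Z_5)| = 120 nodes and 120 * 4 / 2 = 240 undirected edges.
index, edges = cayley_graph_sl2(5)
print(len(index), "nodes,", len(edges), "undirected edges")
```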
