What functions can Graph Neural Networks compute on random graphs? The role of Positional Encoding

Keywords: Graph isomorphism, Expressive power
DOI: 10.48550/arxiv.2305.14814
Publication Date: 2023-01-01
ABSTRACT
We aim to deepen the theoretical understanding of Graph Neural Networks (GNNs) on large graphs, with a focus on their expressive power. Existing analyses relate this notion to the graph isomorphism problem, which is mostly relevant for graphs of small sizes, or study graph classification and regression tasks, while prediction tasks on nodes are far more relevant on large graphs. Recently, several works showed that, on very general random graph models, GNNs converge to certain functions as the number of nodes grows. In this paper, we provide a more complete and intuitive description of the function space generated by equivariant GNNs for node tasks, through general notions of convergence that encompass several previous examples. We emphasize the role of input node features, and study the impact of node Positional Encodings (PEs), a recent line of work that has been shown to yield state-of-the-art results in practice. Through the study of several examples of PEs on large random graphs, we extend previously known universality results to significantly more general models. Our theoretical results hint at some normalization tricks, which are shown numerically to have a positive impact on GNN generalization on synthetic and real data. Our proofs contain new concentration inequalities of independent interest.
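For context, one common concrete instance of a Positional Encoding is the Laplacian eigenvector PE, where the leading nontrivial eigenvectors of the graph Laplacian are attached to the nodes as extra input features. The sketch below is illustrative only and not the paper's exact construction: the helper name `laplacian_positional_encoding`, the choice of the symmetric normalized Laplacian, and the sqrt(n) rescaling (one plausible reading of the "normalization tricks" mentioned in the abstract) are all assumptions.

```python
import numpy as np

def laplacian_positional_encoding(adj, k, rescale=True):
    """Hypothetical helper: k smallest nontrivial Laplacian eigenvectors as PEs.

    adj: (n, n) symmetric 0/1 adjacency matrix.
    k:   number of PE dimensions per node.
    rescale: multiply by sqrt(n) so entries stay O(1) as n grows; this is
             an assumption about the normalization, not the paper's method.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros(n)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    pe = eigvecs[:, 1:k + 1]           # skip the trivial constant eigenvector
    return pe * np.sqrt(n) if rescale else pe

# Usage on a synthetic Erdos-Renyi graph, the kind of random graph model
# under which convergence results of this type are typically stated.
rng = np.random.default_rng(0)
n, p = 200, 0.1
upper = np.triu((rng.random((n, n)) < p).astype(float), 1)
adj = upper + upper.T                          # symmetric, zero diagonal
pe = laplacian_positional_encoding(adj, k=8)   # (200, 8) node features
```

Note that each eigenvector is only defined up to sign, so practical PE pipelines often randomize the signs during training to make the model robust to this ambiguity.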