Simple Path Structural Encoding for Graph Transformers
FOS: Computer and information sciences
Machine Learning (cs.LG)
Artificial Intelligence (cs.AI)
DOI:
10.48550/arXiv.2502.09365
Publication Date:
2025-02-13
AUTHORS (5)
ABSTRACT
Graph transformers extend global self-attention to graph-structured data, achieving notable success in graph learning. Recently, random walk structural encoding (RWSE) has been found to further enhance their predictive power by encoding both structural and positional information into the edge representation. However, RWSE cannot always distinguish between edges that belong to different local graph patterns, which reduces its ability to capture the full structural complexity of graphs. This work introduces Simple Path Structural Encoding (SPSE), a novel method that utilizes simple path counts for edge encoding. We show theoretically and experimentally that SPSE overcomes the limitations of RWSE, providing a richer representation of graph structures, particularly for capturing local cyclic patterns. To make SPSE computationally tractable, we propose an efficient approximate algorithm for simple path counting. SPSE demonstrates significant performance improvements over RWSE on various benchmarks, including molecular and long-range graph datasets, achieving statistically significant gains in discriminative tasks. These results pose SPSE as a powerful alternative to RWSE for enhancing the expressivity of graph transformers.
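Since the paper's code is not available on this page, the following minimal Python sketch (my own illustration, not the authors' implementation) contrasts the two quantities the abstract discusses: random-walk edge features P^k[i, j], obtained from powers of the transition matrix P = D^{-1} A, and exact simple path counts between node pairs, obtained here by a naive bounded depth-first search. All function names are hypothetical. Note that exact counting is exponential in the path-length cap, which is exactly why the paper proposes an efficient approximate counting algorithm; that algorithm is not reproduced here.

    import numpy as np

    def rwse_edge_features(A: np.ndarray, k_max: int) -> np.ndarray:
        """Random-walk edge features P^k[i, j] for k = 1..k_max,
        where P = D^{-1} A is the random-walk transition matrix."""
        deg = A.sum(axis=1, keepdims=True)
        # Row-normalize the adjacency matrix; isolated nodes get all-zero rows.
        P = np.divide(A, deg, out=np.zeros_like(A, dtype=float), where=deg > 0)
        feats = np.empty((k_max, *A.shape))
        Pk = np.eye(A.shape[0])
        for k in range(k_max):
            Pk = Pk @ P
            feats[k] = Pk  # feats[k-1, i, j] = k-step landing probability i -> j
        return feats

    def simple_path_counts(adj: dict, l_max: int) -> dict:
        """Exact counts of simple paths (no repeated nodes) of each length
        1..l_max between node pairs, via bounded DFS. Exponential in l_max;
        the paper uses an approximate algorithm instead of this brute force."""
        counts = {}

        def dfs(src, node, length, visited):
            if length > 0:
                counts.setdefault((src, node), [0] * l_max)[length - 1] += 1
            if length == l_max:
                return
            for nb in adj[node]:
                if nb not in visited:
                    visited.add(nb)
                    dfs(src, nb, length + 1, visited)
                    visited.remove(nb)

        for s in adj:
            dfs(s, s, 0, {s})
        return counts

    # Toy example: the 4-cycle 0-1-2-3-0.
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    A = np.zeros((4, 4))
    for u, nbs in adj.items():
        for v in nbs:
            A[u, v] = 1.0
    print(rwse_edge_features(A, 3)[:, 0, 1])       # [0.5 0.  0.5]
    print(simple_path_counts(adj, 3)[(0, 1)])      # [1, 0, 1]

On the 4-cycle, the edge (0, 1) receives simple-path counts [1, 0, 1] for lengths 1 through 3 (one direct edge, one length-3 path around the cycle), while the k-step landing probabilities are 0.5, 0, 0.5; the path counts expose the cycle structure directly, which is the kind of local cyclic pattern the abstract says SPSE captures.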