DGC: Training Dynamic Graphs with Spatio-Temporal Non-Uniformity using Graph Partitioning by Chunks

Keywords: graph partitioning, speedup, testbed
DOI: 10.48550/arxiv.2309.03523 Publication Date: 2023-01-01
ABSTRACT
Dynamic Graph Neural Networks (DGNNs) have shown a strong capability of learning dynamic graphs by exploiting both spatial and temporal features. Although DGNNs have recently received considerable attention from the AI community and various DGNN models have been proposed, building a distributed system for efficient DGNN training is still challenging. It is well recognized that how to partition the dynamic graph and assign workloads to multiple GPUs plays a critical role in training acceleration. Existing works partition dynamic graphs into snapshots or temporal sequences, which only work well when the graph has uniform spatio-temporal structures. In practice, however, dynamic graphs are not uniformly structured, with some snapshots being very dense while others are sparse. To address this issue, we propose DGC, a distributed DGNN training system that achieves a 1.25x - 7.52x speedup over the state-of-the-art on our testbed. DGC's success stems from a new graph partitioning method that partitions dynamic graphs into chunks, which are essentially subgraphs with modest training workloads and few inter-connections. This partitioning algorithm is based on graph coarsening and can run fast on large graphs. In addition, DGC has a highly efficient run-time, powered by the proposed chunk fusion and adaptive stale aggregation techniques. Extensive experimental results on 3 typical DGNN models and 4 popular dynamic graph datasets are presented to show the effectiveness of DGC.
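To make the coarsening-based chunking idea concrete, here is a minimal, hypothetical sketch of the two steps the abstract describes: contract heavy edges to coarsen the graph so that densely connected regions merge, then greedily pack the coarse nodes into a fixed number of chunks with balanced workloads. All names and heuristics here are illustrative assumptions, not DGC's actual algorithm or API.

```python
# Illustrative sketch (not DGC's implementation): one pass of heavy-edge
# matching for coarsening, followed by greedy load-balanced chunk assignment.
from collections import defaultdict

def coarsen(edges):
    """One pass of heavy-edge matching: merge each unmatched node with its
    heaviest-edge unmatched neighbor. Returns node -> coarse-node id."""
    adj = defaultdict(dict)
    for u, v, w in edges:
        adj[u][v] = adj[u].get(v, 0) + w
        adj[v][u] = adj[v].get(u, 0) + w
    mapping, matched, next_id = {}, set(), 0
    for u in sorted(adj):
        if u in matched:
            continue
        # heaviest unmatched neighbor, if any
        cand = [(w, v) for v, w in adj[u].items() if v not in matched and v != u]
        if cand:
            _, v = max(cand)
            mapping[u] = mapping[v] = next_id
            matched.update((u, v))
        else:
            mapping[u] = next_id
            matched.add(u)
        next_id += 1
    return mapping

def partition_chunks(edges, node_load, k):
    """Pack coarse nodes into k chunks, always adding the largest remaining
    coarse node to the currently lightest chunk. Coarsening has already
    pulled dense regions together, so balancing load is a cheap proxy for
    keeping inter-chunk connections few."""
    mapping = coarsen(edges)
    coarse_load, members = defaultdict(int), defaultdict(list)
    for node, c in mapping.items():
        coarse_load[c] += node_load.get(node, 1)
        members[c].append(node)
    chunks, loads = [[] for _ in range(k)], [0] * k
    for c in sorted(coarse_load, key=coarse_load.get, reverse=True):
        i = loads.index(min(loads))
        chunks[i].extend(members[c])
        loads[i] += coarse_load[c]
    return chunks

# Example: two dense pairs (0,1) and (2,3) joined by light edges end up
# in separate chunks.
edges = [(0, 1, 5), (1, 2, 1), (2, 3, 5), (0, 2, 1)]
print(partition_chunks(edges, {}, 2))
```

A production system would iterate coarsening over multiple levels and weight nodes by their actual spatio-temporal training cost; this single-pass version only shows the shape of the approach.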