Accel-GCN: High-Performance GPU Accelerator Design for Graph Convolution Networks

DOI: 10.48550/arxiv.2308.11825 Publication Date: 2023-01-01
ABSTRACT
Graph Convolutional Networks (GCNs) are pivotal in extracting latent information from graph data across various domains, yet their acceleration on mainstream GPUs is challenged by workload imbalance and memory access irregularity. To address these challenges, we present Accel-GCN, a GPU accelerator architecture for GCNs. The design of Accel-GCN encompasses: (i) a lightweight degree sorting stage to group nodes with similar degree; (ii) a block-level partition strategy that dynamically adjusts warp workload sizes, enhancing shared memory locality and workload balance while reducing metadata overhead compared with designs like GNNAdvisor; and (iii) a combined warp strategy that improves memory coalescing and computational parallelism in the column dimension of dense matrices. Utilizing these principles, we formulated a kernel for sparse matrix multiplication (SpMM) in GCNs that employs the block-level partitioning and combined warp strategy. This approach augments performance and multi-level memory efficiency and optimizes memory bandwidth by exploiting memory coalescing and alignment. Evaluation across 18 benchmark graphs reveals that Accel-GCN outperforms cuSPARSE, GNNAdvisor, and graph-BLAST by factors of 1.17x, 1.86x, and 2.94x respectively. These results underscore Accel-GCN as an effective solution for improving GCN computational efficiency.
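To make the preprocessing ideas in the abstract concrete, the sketch below illustrates (i) degree sorting and (ii) block-level partitioning with dynamically chosen warp sizes on a CSR graph. This is a hypothetical CPU-side illustration of the general technique, not the authors' implementation; the function names, the block size of 8 rows, and the warp-size candidates are all assumptions made for the example.

```python
# Illustrative sketch (NOT the Accel-GCN source): degree sorting and
# block-level partitioning over a CSR graph. Names and parameters here
# (build_blocks, block_rows, warp_sizes) are hypothetical.

def degree_sort(indptr):
    """Return node ids sorted by degree (CSR row lengths), descending."""
    degrees = [indptr[i + 1] - indptr[i] for i in range(len(indptr) - 1)]
    order = sorted(range(len(degrees)), key=lambda i: -degrees[i])
    return order, degrees

def build_blocks(order, degrees, warp_sizes=(32, 16, 8, 4)):
    """Group similar-degree nodes into blocks and pick a warp size per block.

    After degree sorting, consecutive rows have similar degree, so one
    warp size fits a whole block: each block gets the narrowest warp size
    that still covers its longest row (falling back to the widest size),
    balancing work across warps.
    """
    block_rows = 8  # hypothetical number of rows handled per thread block
    blocks = []
    for start in range(0, len(order), block_rows):
        rows = order[start:start + block_rows]
        max_deg = max(degrees[r] for r in rows)
        ws = min((w for w in warp_sizes if w >= max_deg),
                 default=warp_sizes[0])
        blocks.append((rows, ws))
    return blocks

# Toy CSR graph with 4 nodes of degrees 5, 1, 2, 1
indptr = [0, 5, 6, 8, 9]
order, degrees = degree_sort(indptr)
print(order)  # -> [0, 2, 1, 3]: sorted by degree, ties kept stable
print(build_blocks(order, degrees, warp_sizes=(8, 4, 2)))
```

On a GPU, this metadata would then drive the SpMM kernel: each block of similar-degree rows is assigned warps of the chosen width, which is how the block-level partition strategy keeps workloads balanced.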