Propagation Structure-Aware Graph Transformer for Robust and Interpretable Fake News Detection
DOI: 10.1145/3637528.3672024
Publication Date: 2024-08-25
AUTHORS (5)
ABSTRACT
The rise of social media has intensified the risks of fake news, prompting a growing focus on leveraging graph learning methods such as graph neural networks (GNNs) to understand the post-spread patterns of news. However, existing methods often produce less robust and interpretable results, as they assume that all information within the propagation graph is relevant to the news item, without adequately eliminating noise from engaged users. Furthermore, they inadequately capture the intricate patterns inherent in long-sequence dependencies of news propagation due to their use of shallow GNNs aimed at avoiding the over-smoothing issue, consequently diminishing overall detection accuracy. In this paper, we address these issues by proposing the Propagation Structure-aware Graph Transformer (PSGT). Specifically, to filter out noisy users within propagation graphs, PSGT first designs a noise-reduction self-attention mechanism based on the information bottleneck principle, aiming to minimize or completely remove the attention links among task-irrelevant users. Moreover, to capture multi-scale propagation structures while considering propagation features, we present a novel relational position encoding for the Transformer, enabling the model to capture both the depth and distance relationships of users. Extensive experiments demonstrate the effectiveness, interpretability, and robustness of our proposed PSGT.
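To make the noise-reduction idea concrete, the toy sketch below zeroes out low-weight attention links and renormalizes the rest. This is a hypothetical simplification of the abstract's description, not the paper's actual information-bottleneck mechanism; the threshold `tau` and the function names are made up for illustration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def noise_reduced_attention(scores, tau=0.05):
    """Toy sketch of one noise-reduced attention row.

    `scores` are raw attention logits from one query (e.g. the news post)
    to the users in its propagation graph. Softmax weights that fall below
    the threshold `tau` are removed entirely, mimicking the idea of cutting
    attention links to task-irrelevant users; the surviving weights are
    renormalized to sum to one. `tau` is an assumed hyperparameter.
    """
    weights = softmax(scores)
    kept = [w if w >= tau else 0.0 for w in weights]
    total = sum(kept)
    if total == 0.0:  # fall back to dense weights if everything was pruned
        return weights
    return [w / total for w in kept]

# Example: one strong, one moderate, and two near-noise links.
print(noise_reduced_attention([3.0, 2.0, -2.0, -2.5]))
```

The actual PSGT mechanism learns which links to suppress by optimizing an information-bottleneck objective rather than using a fixed cutoff; the sketch only shows the structural effect (sparsified, renormalized attention) described in the abstract.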