Self-Explainable Graph Transformer for Link Sign Prediction

DOI: 10.1609/aaai.v39i11.33316 · Publication Date: 2025-04-11
ABSTRACT
Signed Graph Neural Networks (SGNNs) have been shown to be effective in analyzing the complex patterns of real-world situations where positive and negative links coexist. However, SGNN models suffer from poor explainability, which limits their adoption in critical scenarios that require understanding the rationale behind predictions. To the best of our knowledge, there is currently no research work on the explainability of SGNN models. Our goal is to explain the decision-making of signed graph neural networks for the downstream task of link sign prediction. Since post-hoc explanations are not derived directly from the models, they may be biased and misrepresent the true explanations. Therefore, in this paper we introduce a Self-Explainable Signed Graph transformer (SE-SGformer) framework, which outputs explainable information while ensuring high prediction accuracy. Specifically, we propose a new Transformer architecture for signed graphs and theoretically demonstrate that using positional encoding based on random walks has greater expressive power than current SGNN methods and other Transformer-based approaches. We construct a novel decision process that discovers the K-nearest (farthest) positive (negative) neighbors of a node to replace the neural-network-based decoder for predicting edge signs. These K neighbors represent crucial information about the formation of edges between nodes and thus serve as important explanations for the prediction process. We conducted experiments on several datasets to validate the effectiveness of SE-SGformer, which outperforms state-of-the-art methods by improving accuracy by 2.2% and explainability by 73.1% in the best-case scenario.
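To make the two main ingredients of the abstract concrete, below is a minimal NumPy sketch, not the paper's implementation. `rw_positional_encoding` computes one common random-walk positional encoding (the return probabilities on the diagonals of powers of the random-walk matrix); the paper's exact construction may differ. `knn_sign_decoder` is a hypothetical rendering of the K-nearest/K-farthest neighbor decision rule, including an assumed fallback for nodes in neither set.

```python
import numpy as np

def rw_positional_encoding(adj: np.ndarray, k: int) -> np.ndarray:
    """Random-walk positional encoding (one common variant): for each node i,
    collect the return probabilities (RW^t)_{ii} for t = 1..k, where
    RW is the row-normalized random-walk matrix of the graph."""
    deg = adj.sum(axis=1).astype(float)
    deg[deg == 0] = 1.0                       # guard against isolated nodes
    rw = adj / deg[:, None]                   # random-walk transition matrix
    pe = np.empty((adj.shape[0], k))
    power = np.eye(adj.shape[0])
    for t in range(k):
        power = power @ rw                    # RW^(t+1)
        pe[:, t] = np.diag(power)             # probability of returning after t+1 steps
    return pe

def knn_sign_decoder(z: np.ndarray, u: int, v: int, k: int) -> int:
    """Hypothetical decision rule: predict the sign of edge (u, v) from u's
    K-nearest (positive evidence) and K-farthest (negative evidence)
    neighbors in embedding space z."""
    d = np.linalg.norm(z - z[u], axis=1)      # distances from u to all nodes
    order = np.argsort(d)
    order = order[order != u]                 # exclude u itself
    nearest, farthest = order[:k], order[-k:]
    if v in nearest:
        return +1                             # v among u's closest nodes -> positive
    if v in farthest:
        return -1                             # v among u's most distant nodes -> negative
    # Assumed fallback: compare v's distance to the two decision thresholds.
    near_th, far_th = d[order[k - 1]], d[order[-k]]
    return +1 if abs(d[v] - near_th) <= abs(d[v] - far_th) else -1

# Toy usage on a random undirected graph with stand-in embeddings.
rng = np.random.default_rng(0)
adj = (rng.random((8, 8)) < 0.3).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T
pe = rw_positional_encoding(adj, k=4)         # (8, 4) positional features
z = rng.normal(size=(8, 16))                  # placeholder node embeddings
print(knn_sign_decoder(z, u=0, v=3, k=2))     # prints +1 or -1
```

Because the decision is made by explicit neighbor comparisons rather than a learned decoder, the K neighbors themselves double as the explanation for each predicted sign.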