Adversarial Attack against Cross-lingual Knowledge Graph Alignment
Topics: Vulnerability, Maximization
DOI: 10.18653/v1/2021.emnlp-main.432
Publication Date: 2021-12-17T03:56:42Z
AUTHORS (9)
ABSTRACT
Recent literature has shown that knowledge graph (KG) learning models are highly vulnerable to adversarial attacks. However, there is still a paucity of vulnerability analyses of cross-lingual entity alignment under adversarial attacks. This paper proposes an adversarial attack model with two novel attack techniques to perturb the KG structure and degrade the quality of deep cross-lingual entity alignment. First, a density maximization method is employed to hide the attacked entities in dense regions of the two KGs, such that the derived perturbations are unnoticeable. Second, a signal amplification method is developed to reduce gradient vanishing issues in the process of adversarial attacks, further improving the attack effectiveness.
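The two techniques named in the abstract can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: the embeddings, the k-NN density score, the gradient floor, and the toy alignment loss below are all illustrative assumptions chosen only to show (a) selecting attack targets in dense embedding regions so perturbations blend in, and (b) rescaling a vanishing gradient so the attack signal survives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy entity embeddings standing in for a trained alignment model's
# representations (hypothetical; the paper's model is not reproduced here).
emb = rng.normal(size=(50, 8))

def knn_density(embeddings, k=5):
    """Density score per entity: inverse mean distance to its k nearest neighbors."""
    d = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distance
    knn = np.sort(d, axis=1)[:, :k]
    return 1.0 / (knn.mean(axis=1) + 1e-8)

def amplify(grad, floor=1e-3):
    """Signal amplification (assumed form): rescale a gradient whose norm
    has vanished so it still produces a usable attack step."""
    n = np.linalg.norm(grad)
    return grad if n >= floor else grad * (floor / (n + 1e-12))

# Density maximization idea: attack the entities sitting in the densest
# regions, where a small perturbation is hardest to notice.
density = knn_density(emb)
targets = np.argsort(-density)[:3]

# Toy alignment loss: 0.5 * ||e - c||^2 toward the embedding centroid.
centroid = emb.mean(axis=0)
for t in targets:
    grad = emb[t] - centroid             # gradient of the toy loss
    step = amplify(grad)                 # keep the signal from vanishing
    emb[t] -= 0.1 * step                 # perturb to degrade alignment
```

The density score here is a simple k-NN surrogate; any density estimate over the embedding space would serve the same selective purpose in this sketch.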
CITATIONS (6)