To Copy Rather Than Memorize: A Vertical Learning Paradigm for Knowledge Graph Completion
Keywords:
Memorization, Generality
DOI:
10.48550/arXiv.2305.14126
Publication Date:
2023
AUTHORS (11)
ABSTRACT
Embedding models have shown great power in the knowledge graph completion (KGC) task. By learning structural constraints for each training triple, these methods implicitly memorize intrinsic relation rules to infer missing links. However, this paper points out that multi-hop relation rules are hard to reliably memorize due to the inherent deficiencies of such an implicit memorization strategy, making embedding models underperform in predicting links between distant entity pairs. To alleviate this problem, we present the Vertical Learning Paradigm (VLP), which extends embedding models by allowing them to explicitly copy target information from related factual triples for more accurate prediction. Rather than solely relying on implicit memory, VLP directly provides additional cues to improve the generalization ability of embedding models, especially making distant link prediction significantly easier. Moreover, we also propose a novel relative distance based negative sampling technique (ReD) for more effective optimization. Experiments demonstrate the validity and generality of our proposals on two standard benchmarks. Our code is available at https://github.com/rui9812/VLP.
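For context on the "implicit memorization" described above: a translational embedding model such as TransE (a standard KGC baseline; the abstract does not say which backbone the paper builds on) encodes each training triple (h, r, t) as a structural constraint h + r ≈ t, so relation patterns exist only inside the learned vectors. A minimal sketch with toy data and illustrative hyperparameters:

    import numpy as np

    # Toy setup: 4 entities and 2 relations in an 8-dimensional space.
    rng = np.random.default_rng(0)
    dim = 8
    ent = rng.normal(scale=0.1, size=(4, dim))   # entity embeddings
    rel = rng.normal(scale=0.1, size=(2, dim))   # relation embeddings

    def score(h, r, t):
        # TransE plausibility: smaller ||h + r - t|| means more plausible.
        return np.linalg.norm(ent[h] + rel[r] - ent[t])

    pos = (0, 0, 1)   # observed triple (head, relation, tail)
    neg = (0, 0, 2)   # corrupted triple with a wrong tail
    margin, lr = 1.0, 0.1

    for _ in range(200):
        # Margin ranking loss: push the positive score below the negative one.
        if margin + score(*pos) - score(*neg) <= 0.0:
            break
        # Manual gradient of ||h + r - t|| with respect to each embedding.
        for (h, r, t), sign in ((pos, 1.0), (neg, -1.0)):
            diff = ent[h] + rel[r] - ent[t]
            g = sign * diff / (np.linalg.norm(diff) + 1e-9)
            ent[h] -= lr * g
            rel[r] -= lr * g
            ent[t] += lr * g

    print(f"positive score {score(*pos):.3f} vs negative score {score(*neg):.3f}")

Because the rule is stored only implicitly in the vectors, composing it over multiple hops degrades, which is the failure mode on distant entity pairs that the paper targets.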
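The abstract names ReD only as a "relative distance based negative sampling technique" without further detail, so the following is a hypothetical sketch of one way such a sampler could work, assuming "distance" means hop distance in the graph and that nearby entities make harder negatives; the toy graph, the 1 / (1 + distance) weighting, and the function names are all illustrative, and the paper's actual ReD procedure may differ.

    from collections import deque
    import random

    # Tiny undirected graph skeleton of a KG, as adjacency lists over entity ids.
    graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}

    def hop_distances(src):
        # Breadth-first search: hop distance from src to every reachable entity.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def sample_negative_tail(head, true_tail):
        # Corrupt the tail, weighting candidates by 1 / (1 + hop distance to
        # the head) so that nearby (harder) entities are drawn more often.
        dist = hop_distances(head)
        candidates = [e for e in graph if e not in (head, true_tail)]
        weights = [1.0 / (1 + dist.get(e, len(graph))) for e in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]

    random.seed(0)
    print(sample_negative_tail(head=0, true_tail=1))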