MMKRL: A robust embedding approach for multi-modal knowledge graph representation learning
KEYWORDS
Knowledge graph
Robustness
Feature learning
Representation
DOI: 10.1007/s10489-021-02693-9
Publication Date: 2021-09-29
AUTHORS (5)
ABSTRACT
Most knowledge representation learning (KRL) methods use only structured knowledge graphs (KGs); however, a great deal of multi-modal (textual and visual) knowledge remains unused. To address this challenge, we propose a novel solution called multi-modal knowledge representation learning (MMKRL) that takes advantage of multi-source (structured, textual, and visual) knowledge. Instead of simply integrating multi-modal knowledge with structured knowledge in a unified space, we introduce a component alignment scheme and combine it with translation methods to accomplish multi-modal KRL. Specifically, MMKRL first reconstructs multi-source knowledge by summing different plausibility functions and then aligns the multi-source knowledge with specific norm constraints to reduce reconstruction error. We also adopt an adversarial training strategy to enhance the robustness of MMKRL, a consideration that is rare in existing multi-modal KRL methods. Experimental results show that MMKRL utilizes multi-modal knowledge effectively, achieving better link prediction and triple classification results than other baselines on two widely used datasets. Furthermore, when relying only on structured knowledge or on limited multi-source knowledge, MMKRL still achieves competitive link prediction results, demonstrating our model's superiority.