MONET: Modality-Embracing Graph Convolutional Network and Target-Aware Attention for Multimedia Recommendation
DOI: 10.48550/arXiv.2312.09511
Publication Date: 2023-12
AUTHORS (4)
ABSTRACT
In this paper, we focus on multimedia recommender systems using graph convolutional networks (GCNs), where multimodal features as well as user-item interactions are employed together. Our study aims to exploit multimodal features more effectively in order to accurately capture users' preferences for items. To this end, we point out the following two limitations of existing GCN-based multimedia recommender systems: (L1) although the multimodal features of items a user has interacted with can reveal her preferences for items, existing methods utilize GCNs designed to focus only on capturing collaborative signals, resulting in insufficient reflection of the multimodal features in the final user/item embeddings; (L2) although a user decides whether to prefer a target item by considering its multimodal features, existing methods represent her with a single embedding regardless of the target item's multimodal features and then use this embedding to predict her preference for the target item. To address the above issues, we propose a novel multimedia recommender system, named MONET, composed of two core ideas: a modality-embracing GCN (MeGCN) and target-aware attention. Through extensive experiments on four real-world datasets, we demonstrate i) the significant superiority of MONET over seven state-of-the-art competitors (up to 30.32% higher accuracy in terms of recall@20, compared to the best competitor) and ii) the effectiveness of the two core ideas in MONET. All codes are available at https://github.com/Kimyungi/MONET.
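The two ideas named in the abstract can be illustrated with a minimal PyTorch sketch: propagating modality features over the user-item graph so they survive into the final embeddings (addressing L1), and attending over a user's per-modality views conditioned on the target item rather than scoring with one static user embedding (addressing L2). All names, shapes, and the dot-product scorer below are illustrative assumptions, not the authors' implementation; see https://github.com/Kimyungi/MONET for the actual code.

```python
# Hedged sketch of the two core ideas described in the abstract.
# Shapes and layer choices are assumptions, not MONET's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F


def megcn_propagate(adj: torch.Tensor, modal_feat: torch.Tensor,
                    n_layers: int = 2) -> torch.Tensor:
    """Spread modality features over the user-item graph (idea L1),
    so multimodal signals reach the final user/item embeddings."""
    # adj: (N, N) normalized user-item adjacency (sparse); modal_feat: (N, d)
    layers = [modal_feat]
    h = modal_feat
    for _ in range(n_layers):
        h = torch.sparse.mm(adj, h)      # one hop of neighborhood smoothing
        layers.append(h)
    return torch.stack(layers).mean(dim=0)  # layer-averaged readout


class TargetAwareAttention(nn.Module):
    """Re-weight a user's per-modality views by the target item (idea L2),
    instead of using a single, target-independent user embedding."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)  # projects the target item
        self.k = nn.Linear(dim, dim, bias=False)  # projects user modality views

    def forward(self, user_modal: torch.Tensor,
                target_item: torch.Tensor) -> torch.Tensor:
        # user_modal: (B, M, d) user views per modality; target_item: (B, d)
        q = self.q(target_item).unsqueeze(1)                       # (B, 1, d)
        k = self.k(user_modal)                                     # (B, M, d)
        attn = F.softmax((q * k).sum(-1) / k.size(-1) ** 0.5, dim=-1)  # (B, M)
        user = (attn.unsqueeze(-1) * user_modal).sum(dim=1)        # (B, d)
        return (user * target_item).sum(-1)                        # score (B,)


# Toy usage: two modalities (e.g., visual and textual), 64-dim embeddings.
B, M, d = 8, 2, 64
scorer = TargetAwareAttention(d)
scores = scorer(torch.randn(B, M, d), torch.randn(B, d))  # shape (B,)
```

The key design point the abstract argues for is visible in the forward pass: the attention weights, and therefore the effective user representation, change with every candidate item, whereas a conventional GCN recommender would compute the user vector once and reuse it for all items.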