Mitigating Recommendation Biases via Group-Alignment and Global-Uniformity in Representation Learning
Keywords: Debiasing, Representation, Regularization
DOI: 10.1145/3664931
Publication Date: 2024-05-14
AUTHORS (7)
ABSTRACT
Collaborative Filtering (CF) plays a crucial role in modern recommender systems, leveraging historical user-item interactions to provide personalized suggestions. However, CF-based methods often encounter biases due to imbalances in the training data, which make models tend to prioritize recommending popular items and perform unsatisfactorily on inactive users. Existing works address this issue by rebalancing training samples, reranking recommendation results, or making the modeling process robust to the bias. Despite their effectiveness, these approaches may compromise accuracy or be sensitive to weighting strategies, making them challenging to train. Therefore, exploring how to mitigate these biases remains an urgent demand. In this article, we deeply analyze the causes and effects of the biases and propose a framework to alleviate biases in recommendation from the perspective of representation distribution, namely Group-Alignment and Global-Uniformity Enhanced Representation Learning for Debiasing Recommendation (AURL). Specifically, we identify two significant problems in the representation distribution of users and items, namely group-discrepancy and global-collapse, which directly lead to biased recommendation results. To this end, we propose two simple but effective regularizers in the representation space, respectively named group-alignment and global-uniformity. The goal of group-alignment is to bring the representation distribution of long-tail entities closer to that of popular entities, while global-uniformity aims to preserve the information of entities as much as possible by evenly distributing representations. Our method optimizes both regularization terms to mitigate recommendation biases. Please note that AURL applies to arbitrary recommendation backbones. Extensive experiments on three real datasets with various recommendation backbones verify the superiority of our proposed framework. The results show that AURL not only outperforms existing debiasing models in mitigating biases but also improves recommendation performance to some extent.
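The abstract names the two regularizers but gives no formulas. The PyTorch sketch below illustrates one plausible reading, assuming group-alignment penalizes the distance between the centroids of popular and long-tail embeddings and global-uniformity follows the hypersphere-uniformity loss of Wang and Isola (2020); the function names, the `popular_mask` input, and the trade-off weights are hypothetical illustrations, not AURL's actual implementation.

```python
import torch
import torch.nn.functional as F

def group_alignment(emb: torch.Tensor, popular_mask: torch.Tensor) -> torch.Tensor:
    """Hypothetical group-alignment regularizer: pull the centroid of
    long-tail entity embeddings toward the centroid of popular ones.
    `popular_mask` is a boolean tensor marking popular entities."""
    pop_center = emb[popular_mask].mean(dim=0)
    tail_center = emb[~popular_mask].mean(dim=0)
    return (pop_center - tail_center).pow(2).sum()

def global_uniformity(emb: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Hypothetical global-uniformity regularizer in the style of the
    Wang & Isola (2020) uniformity loss: push L2-normalized embeddings
    to spread evenly over the unit hypersphere, counteracting collapse."""
    z = F.normalize(emb, dim=-1)
    sq_dist = torch.pdist(z, p=2).pow(2)  # pairwise squared distances
    return sq_dist.mul(-t).exp().mean().log()

# Usage sketch: both terms are added to the backbone's recommendation
# loss. `rec_loss`, `user_emb`, `item_emb`, and the masks come from the
# backbone; `lam_a` and `lam_u` are assumed trade-off hyperparameters.
# loss = (rec_loss
#         + lam_a * (group_alignment(user_emb, active_users)
#                    + group_alignment(item_emb, popular_items))
#         + lam_u * (global_uniformity(user_emb)
#                    + global_uniformity(item_emb)))
```

Because both terms operate only on the learned embeddings, this style of regularization can be attached to any embedding-based backbone (e.g., matrix factorization or a graph CF model), which is consistent with the abstract's claim that AURL applies to arbitrary backbones.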