CELA: Cost-Efficient Language Model Alignment for CTR Prediction

FOS: Computer and information sciences 68T07 Information Retrieval (cs.IR) Computer Science - Information Retrieval
DOI: 10.48550/arxiv.2405.10596 Publication Date: 2024-05-17
ABSTRACT
Click-Through Rate (CTR) prediction holds a paramount position in recommender systems. The prevailing ID-based paradigm underperforms in cold-start scenarios due to the skewed distribution of feature frequency. Additionally, relying on a single modality fails to exploit the knowledge contained within textual features. Recent efforts have sought to mitigate these challenges by integrating Pre-trained Language Models (PLMs): they design hard prompts that structure the raw features of each interaction into text and then apply PLMs for processing. With external knowledge and reasoning capabilities, PLMs can extract valuable information even in cases of sparse interactions. Nevertheless, compared to ID-based models, pure text modeling degrades the efficacy of collaborative filtering, as well as scalability and efficiency during both training and inference. To address these issues, we propose Cost-Efficient Language Model Alignment (CELA) for CTR prediction. CELA incorporates textual features and language models while preserving the collaborative filtering capabilities of ID-based models. This model-agnostic framework can be equipped with plug-and-play textual features, with item-level alignment enhancing the utilization of external information while maintaining inference efficiency. Through extensive offline experiments, CELA demonstrates superior performance over state-of-the-art methods. Furthermore, an online A/B test conducted on an industrial App recommender system showcases its practical effectiveness, solidifying the potential of CELA for real-world applications.
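The abstract does not spell out how item-level alignment is computed. As a minimal sketch under assumed details (the projection matrix, embedding sizes, and cosine-distance objective below are illustrative stand-ins, not the paper's specification), aligning frozen PLM text embeddings with a backbone's trainable ID embeddings might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, d_text, d_id = 4, 8, 4

# Hypothetical stand-ins: frozen PLM text embeddings for each item,
# and the ID embeddings learned by the CTR backbone.
text_emb = rng.normal(size=(n_items, d_text))
id_emb = rng.normal(size=(n_items, d_id))

# Plug-and-play projection mapping text embeddings into the ID space,
# so textual knowledge can be injected without changing the backbone.
W = rng.normal(size=(d_text, d_id)) / np.sqrt(d_text)
projected = text_emb @ W  # shape: (n_items, d_id)

def alignment_loss(a: np.ndarray, b: np.ndarray) -> float:
    """Item-level alignment as mean cosine distance between paired embeddings."""
    a_n = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_n = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float(np.mean(1.0 - np.sum(a_n * b_n, axis=1)))

loss = alignment_loss(projected, id_emb)
print(projected.shape, round(loss, 4))
```

Because the alignment is computed per item (not per interaction), the PLM can be kept out of the serving path: at inference time the backbone uses only its own embeddings, which is consistent with the efficiency claim in the abstract.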