Towards Efficient and Effective Unlearning of Large Language Models for Recommendation

FOS: Computer and information sciences
Subjects: Artificial Intelligence (cs.AI); Information Retrieval (cs.IR)
DOI: 10.48550/arxiv.2403.03536
Publication Date: 2024-03-06
ABSTRACT
The significant advancements in large language models (LLMs) give rise to a promising research direction, i.e., leveraging LLMs as recommenders (LLMRec). The efficacy of LLMRec arises from the open-world knowledge and reasoning capabilities inherent in LLMs. LLMRec acquires its recommendation capabilities through instruction tuning based on user interaction data. However, in order to protect user privacy and optimize utility, it is also crucial for LLMRec to intentionally forget specific user data, which is generally referred to as recommendation unlearning. In the era of LLMs, recommendation unlearning poses new challenges in terms of \textit{inefficiency} and \textit{ineffectiveness}. Existing unlearning methods require updating billions of parameters in LLMRec, which is costly and time-consuming. Besides, they always impact the model utility during the unlearning process. To this end, we propose \textbf{E2URec}, the first \underline{E}fficient and \underline{E}ffective \underline{U}nlearning method for LLM\underline{Rec}. Our proposed E2URec enhances unlearning efficiency by updating only a few additional LoRA parameters, and improves unlearning effectiveness by employing a teacher-student framework, where we maintain multiple teacher networks to guide the unlearning process. Extensive experiments show that E2URec outperforms state-of-the-art baselines on two real-world datasets. Specifically, E2URec can efficiently forget the specific data without affecting recommendation performance. The source code is at \url{https://github.com/justarter/E2URec}.
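The abstract names two mechanisms: trainable LoRA adapters for efficiency, and multiple teacher networks for effectiveness. The snippet below is a minimal PyTorch sketch of that combination, not the authors' released implementation: the LoRALinear class, the teacher_student_unlearning_loss helper, the rank of 8, and the choice of KL-divergence targets (a "forgetting" teacher for erased data, a "remembering" teacher for retained data) are all illustrative assumptions; the actual E2URec objectives are defined in the paper and repository.

    import torch
    import torch.nn.functional as F

    class LoRALinear(torch.nn.Module):
        """A frozen linear layer plus a small trainable low-rank update.

        Illustrative stand-in for the "few additional LoRA parameters":
        only A and B receive gradients, so unlearning never touches the
        billions of frozen base weights.
        """
        def __init__(self, base: torch.nn.Linear, rank: int = 8):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # base model stays frozen
            self.A = torch.nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
            self.B = torch.nn.Parameter(torch.zeros(rank, base.out_features))

        def forward(self, x):
            # Base output plus the low-rank correction x @ A @ B.
            return self.base(x) + x @ self.A @ self.B

    def teacher_student_unlearning_loss(student_logits,
                                        forget_teacher_logits,
                                        remember_teacher_logits,
                                        is_forgotten_batch: bool):
        """One plausible reading of the multi-teacher guidance (assumption):
        match a forgetting teacher on data to be erased and a remembering
        teacher on retained data, via KL divergence between distributions.
        """
        target = forget_teacher_logits if is_forgotten_batch else remember_teacher_logits
        return F.kl_div(F.log_softmax(student_logits, dim=-1),
                        F.softmax(target.detach(), dim=-1),
                        reduction="batchmean")

Under these assumptions, freezing the base weights keeps each unlearning request cheap, while the two teachers pull the student toward "never saw the forgotten data" behavior on erased interactions without degrading predictions on the retained data.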