Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning
CoRR (2024)
Abstract
Personalization in large language models (LLMs) is increasingly important,
aiming to align an LLM's interactions, content, and recommendations with
individual user preferences. Recent advances in LLM personalization have
spotlighted effective prompt design, enriching user queries with
non-parametric knowledge through behavior history retrieval and textual
profiles. However, these approaches are limited by a lack of model
ownership, resulting in constrained customization and privacy issues. Moreover,
they often fail to accurately capture user behavior patterns, especially
when user data are complex and dynamic. To address these shortcomings,
we introduce One PEFT Per User (OPPU), which employs personalized
parameter-efficient fine-tuning (PEFT) modules to store user-specific behavior
patterns and preferences. By plugging in their personal PEFT parameters, users
can own and use their LLMs individually. OPPU integrates parametric user
knowledge in the personal PEFT parameters with the non-parametric knowledge
acquired through retrieval and profiles. This integration enables individual
LLMs to adapt to shifts in user behavior. Experimental results demonstrate that
OPPU significantly outperforms existing prompt-based methods across seven
diverse tasks in the LaMP benchmark. Further in-depth studies reveal OPPU's
enhanced capabilities in handling user behavior shifts, modeling users at
different activity levels, maintaining robustness across various user history
formats, and displaying versatility with different PEFT methods.
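The "one PEFT per user" idea can be illustrated with a minimal sketch: a frozen base weight shared by all users, plus a small LoRA-style low-rank adapter that each user owns and plugs in at inference time. All names, shapes, and the scalar initialization below are illustrative assumptions, not the paper's actual implementation (the paper applies PEFT modules to a full LLM, not a single linear layer).

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 8, 8, 2  # toy dimensions, purely illustrative

# Frozen base weight, shared by all users (stands in for the base LLM).
W_base = rng.normal(size=(d_out, d_in))

def make_user_adapter(seed, scale=0.1):
    """Each user owns a small low-rank pair (A, B) — their personal PEFT module."""
    r = np.random.default_rng(seed)
    A = r.normal(size=(rank, d_in)) * scale
    B = r.normal(size=(d_out, rank)) * scale
    return A, B

def forward(x, adapter=None):
    """Base forward pass, optionally with a user's adapter plugged in."""
    y = W_base @ x
    if adapter is not None:
        A, B = adapter
        y = y + B @ (A @ x)  # low-rank personal correction on top of the base
    return y

x = rng.normal(size=d_in)
alice = make_user_adapter(seed=1)  # hypothetical user adapters
bob = make_user_adapter(seed=2)

# Same base model, different personal parameters -> personalized outputs.
y_alice = forward(x, alice)
y_bob = forward(x, bob)
print(np.allclose(y_alice, y_bob))  # different adapters give different outputs
```

Because each adapter has only `rank * (d_in + d_out)` parameters, a user can store and swap their module cheaply while the base weights never change, which is what gives users ownership of their personalization.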