High-Dimension Human Value Representation in Large Language Models
arXiv (2024)
Abstract
The widespread application of Large Language Models (LLMs) across various
tasks and fields has necessitated the alignment of these models with human
values and preferences. Given the variety of approaches to human value
alignment, ranging from Reinforcement Learning from Human Feedback (RLHF) to
constitutional learning, there is an urgent need to understand the scope
and nature of the human values injected into these models before their release.
There is also a need to align models without a costly, large-scale human
annotation effort. We propose UniVaR, a high-dimensional representation of
human value distributions in LLMs, orthogonal to model architecture and
training data. Trained from the value-relevant output of eight multilingual
LLMs and tested on the output from four multilingual LLMs, namely LLaMA2,
ChatGPT, JAIS and Yi, we show that UniVaR is a powerful tool to compare the
distribution of human values embedded in different LLMs with different language
sources. Through UniVaR, we explore how different LLMs prioritize various
values in different languages and cultures, shedding light on the complex
interplay between human values and language modeling.
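The comparison the abstract describes can be illustrated with a minimal, purely hypothetical sketch: embed each model's value-relevant answers as vectors, then compare models by the cosine similarity of their mean value embeddings. The function name `value_similarity`, the mock embeddings, and the model labels below are all assumptions for illustration, not the paper's actual UniVaR pipeline.

```python
import numpy as np


def value_similarity(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between the mean value embeddings of two LLMs.

    Each row of `emb_a`/`emb_b` is assumed to be the embedding of one
    value-eliciting QA output from the corresponding model.
    """
    ca, cb = emb_a.mean(axis=0), emb_b.mean(axis=0)
    return float(ca @ cb / (np.linalg.norm(ca) * np.linalg.norm(cb)))


# Mock "value embeddings": 100 outputs x 8 dimensions per model.
# Models A and B are drawn from the same distribution (similar values);
# model C is drawn from an opposed one (dissimilar values).
rng = np.random.default_rng(0)
llm_a = rng.normal(loc=1.0, size=(100, 8))   # hypothetical model A
llm_b = rng.normal(loc=1.0, size=(100, 8))   # hypothetical model B
llm_c = rng.normal(loc=-1.0, size=(100, 8))  # hypothetical model C

sim_ab = value_similarity(llm_a, llm_b)  # expected to be high
sim_ac = value_similarity(llm_a, llm_c)  # expected to be low
```

Under these synthetic distributions, models whose outputs cluster around the same value centroid score near 1, while opposed models score near -1, mirroring the kind of cross-model, cross-language comparison the abstract attributes to UniVaR.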