FairPair: A Robust Evaluation of Biases in Language Models through Paired Perturbations
arXiv (2024)
Abstract
Accurately evaluating the differential treatment that language models exhibit toward specific groups is critical to ensuring a positive and safe user experience. An ideal evaluation should be robust, extendable to new groups or attributes, and able to capture biases that appear in typical usage rather than only in extreme, rare cases. Relatedly, bias evaluation should surface not only egregious biases but also subtle, commonplace ones, such as a tendency to talk about appearance when discussing women. We present FairPair, an evaluation framework for assessing differential treatment that occurs during ordinary usage. FairPair operates through counterfactual pairs, but crucially, the paired continuations are grounded in the same demographic group, which ensures an equivalent comparison. Additionally, unlike prior work, our method accounts for the inherent variability of the generation process itself by measuring sampling variability. We present an evaluation of several commonly used generative models and a qualitative analysis that indicates a preference for discussing family and hobbies with regard to women.
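The abstract does not spell out FairPair's scoring or perturbation rules, but the core construction can be sketched: sample paired continuations that are grounded in the same demographic group, then compare the cross-group gap against the model's own sampling variability. In the sketch below, the `generate` callable, the name-swapping perturbation, and the unigram divergence are all illustrative assumptions rather than the paper's actual implementation.

```python
from collections import Counter


def perturb(text: str, source: str, target: str) -> str:
    """Swap the demographic term (here, a name) in a prompt or continuation."""
    return text.replace(source, target)


def unigram_divergence(texts_a: list[str], texts_b: list[str]) -> float:
    """Toy scorer: L1 distance between unigram frequency distributions."""
    ca = Counter(" ".join(texts_a).lower().split())
    cb = Counter(" ".join(texts_b).lower().split())
    na, nb = sum(ca.values()) or 1, sum(cb.values()) or 1
    return sum(abs(ca[w] / na - cb[w] / nb) for w in set(ca) | set(cb))


def fairpair_style_eval(generate, prompt_template: str,
                        name_a: str, name_b: str, n: int = 20) -> dict:
    """generate(prompt, n) is any callable returning n sampled continuations."""
    # Continuations for the original prompt (e.g., "John is a ...").
    cont_a = generate(prompt_template.format(name=name_a), n)
    # Continuations for the perturbed prompt (e.g., "Jane is a ..."), then mapped
    # back to the original name so both sides are grounded in the same group.
    cont_b = [perturb(c, name_b, name_a)
              for c in generate(prompt_template.format(name=name_b), n)]
    # Sampling-variability baseline: two independent sample sets from the same prompt.
    base_1 = generate(prompt_template.format(name=name_a), n)
    base_2 = generate(prompt_template.format(name=name_a), n)
    return {
        "between_group": unigram_divergence(cont_a, cont_b),        # cross-group gap
        "sampling_variability": unigram_divergence(base_1, base_2),  # generation noise
    }
```

Under this reading, a bias would show up as a between-group divergence that consistently exceeds the sampling-variability baseline across many prompts and sampled pairs; the paper's actual scoring functions and statistics may differ.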