FairRR: Pre-Processing for Group Fairness through Randomized Response
International Conference on Artificial Intelligence and Statistics (2024)
Abstract
The increasing usage of machine learning models in consequential
decision-making processes has spurred research into the fairness of these
systems. While significant work has been done to study group fairness in the
in-processing and post-processing setting, there has been little that
theoretically connects these results to the pre-processing domain. This paper
proposes that achieving group fairness in downstream models can be formulated
as finding the optimal design matrix in which to modify a response variable in
a Randomized Response framework. We show that measures of group fairness can be
directly controlled for with optimal model utility, proposing a pre-processing
algorithm called FairRR that yields excellent downstream model utility and
fairness.
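As a rough illustration of the Randomized Response idea described above, the sketch below randomizes binary labels according to a per-group 2x2 design matrix, where entry (i, j) gives the probability of emitting label j when the true label is i. The specific probabilities, function names, and group-dependent design are illustrative assumptions, not the paper's optimal design.

```python
import numpy as np

def randomize_labels(y, groups, design, rng):
    """Apply Randomized Response to binary labels.

    y      : array of true labels in {0, 1}
    groups : array of group ids, one per sample
    design : dict mapping group id -> 2x2 matrix P,
             with P[i][j] = Pr(output label j | true label i)
    rng    : numpy random Generator
    """
    y_out = np.empty_like(y)
    for idx, (label, g) in enumerate(zip(y, groups)):
        row = design[g][label]              # output distribution for this sample
        y_out[idx] = rng.choice([0, 1], p=row)
    return y_out

rng = np.random.default_rng(0)
y = np.array([0, 1, 1, 0, 1])
groups = np.array([0, 0, 1, 1, 1])
# Illustrative designs: group 0 keeps its label w.p. 0.9, group 1 w.p. 0.7.
design = {
    0: np.array([[0.9, 0.1], [0.1, 0.9]]),
    1: np.array([[0.7, 0.3], [0.3, 0.7]]),
}
y_noisy = randomize_labels(y, groups, design, rng)
```

Choosing these design matrices to trade off a group-fairness constraint against label utility is the optimization the paper formulates; this snippet only shows the mechanism of applying a given design.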