Proving Unfairness of Decision Making Systems Without Model Access

SSRN Electronic Journal (2022)

Abstract
The problem of guaranteeing the fairness of automatic decision-making systems has become a topic of considerable interest. Many competing definitions of fairness have been proposed, as well as methods aiming to achieve or approximate them while maintaining the ability to train useful models. The complementary question of testing the fairness of an existing predictor is important both to the creators of machine learning systems and to their users. More specifically, it is important for users to be able to prove that an unfair system that affects them is indeed unfair, even when full and direct access to the system internals is denied. In this paper, we propose a framework that enables us to prove the unfairness of predictors with known accuracy properties, without direct access to the model, the features it is based on, or even individual predictions. To do so, we analyze the fairness-accuracy trade-off under the definition of demographic parity. We develop an information-theoretic method that uses only an external dataset containing the protected attributes and the targets, and provides a bound on the accuracy of any fair model that predicts the same targets, regardless of the features it is based on. The result is an algorithm that enables proof of unfairness, with absolutely no cooperation from the system owners.
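The abstract does not state the paper's bound explicitly, so the sketch below is only a rough illustration of the auditing idea: for a binary target and a binary protected attribute, any predictor satisfying exact demographic parity has error at least min(P(A=0), P(A=1)) * |P(Y=1|A=0) - P(Y=1|A=1)|, which gives an accuracy ceiling computable from (protected attribute, target) pairs alone. This is a simpler, standard consequence of demographic parity, not the paper's information-theoretic bound; the function names and the synthetic audit data are hypothetical.

```python
# Illustrative sketch only: uses a simple demographic-parity accuracy ceiling,
# not the paper's information-theoretic bound. Assumes binary Y and binary A.
import numpy as np


def fair_accuracy_upper_bound(a: np.ndarray, y: np.ndarray) -> float:
    """Accuracy ceiling for any demographic-parity-fair predictor, estimated
    from an external dataset of (protected attribute, target) pairs."""
    a = np.asarray(a, dtype=int)
    y = np.asarray(y, dtype=int)
    pi0, pi1 = np.mean(a == 0), np.mean(a == 1)   # group proportions
    p0 = y[a == 0].mean()                         # base rate P(Y=1 | A=0)
    p1 = y[a == 1].mean()                         # base rate P(Y=1 | A=1)
    min_error = min(pi0, pi1) * abs(p0 - p1)      # unavoidable error under exact parity
    return 1.0 - min_error


def prove_unfairness(reported_accuracy: float, a: np.ndarray, y: np.ndarray) -> bool:
    """If the system's claimed accuracy exceeds what any demographic-parity-fair
    model could reach on these targets, the system cannot be fair."""
    return reported_accuracy > fair_accuracy_upper_bound(a, y)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.integers(0, 2, size=10_000)                 # protected attribute
    y = rng.binomial(1, np.where(a == 1, 0.7, 0.3))     # targets with unequal base rates
    print(f"Max accuracy of any fair model: {fair_accuracy_upper_bound(a, y):.3f}")
    print("Unfair?", prove_unfairness(0.92, a, y))      # claimed accuracy above the ceiling
```

Note that the audit needs nothing from the system owner beyond a claimed accuracy figure: the bound itself is computed entirely from an external dataset of protected attributes and targets, mirroring the no-cooperation setting the abstract describes.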
Keywords
Machine learning, Fairness, Information theory