Privacy at a Price: Exploring its Dual Impact on AI Fairness
arXiv (2024)
Abstract
The worldwide adoption of machine learning (ML) and deep learning models,
particularly in critical sectors, such as healthcare and finance, presents
substantial challenges in maintaining individual privacy and fairness. These
two elements are vital to a trustworthy environment for learning systems. While
numerous studies have concentrated on protecting individual privacy through
differential privacy (DP) mechanisms, emerging research indicates that
differential privacy in machine learning models can unequally impact separate
demographic subgroups regarding prediction accuracy. This raises a fairness
concern that manifests as biased performance. Although the prevailing view is
that enhancing privacy intensifies fairness disparities, a smaller, yet
significant, subset of research suggests the opposite view. In this article,
with extensive evaluation results, we demonstrate that the impact of
differential privacy on fairness is not monotonic. Instead, we observe that
the accuracy disparity initially grows as more DP noise (enhanced privacy) is
added to the ML process, but subsequently diminishes at higher privacy levels
with even more noise. Moreover, gradient clipping in differentially private
stochastic gradient descent (DP-SGD) can mitigate the negative impact of DP
noise on fairness: a lower clipping threshold moderates the growth of the
accuracy disparity.
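
Since the abstract refers to the per-example gradient clipping and noise addition that define DP-SGD, a minimal sketch of that mechanism is shown below. The logistic-regression model, the toy data, and the hyperparameters (clip_norm, noise_multiplier, lr) are illustrative assumptions, not the paper's actual settings or code.

```python
import numpy as np

def dp_sgd_step(w, X, y, clip_norm=1.0, noise_multiplier=1.1, lr=0.1, rng=None):
    """One DP-SGD step: per-example gradient clipping + Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    n = X.shape[0]
    clipped_sum = np.zeros_like(w)
    for i in range(n):
        # Per-example gradient of the logistic loss.
        p = 1.0 / (1.0 + np.exp(-(X[i] @ w)))
        g = (p - y[i]) * X[i]
        # Clip the per-example gradient so its L2 norm is at most clip_norm
        # (this bounds each example's influence, i.e. the sensitivity).
        g = g / max(1.0, np.linalg.norm(g) / clip_norm)
        clipped_sum += g
    # Add Gaussian noise calibrated to the clipping threshold, then average.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    g_private = (clipped_sum + noise) / n
    return w - lr * g_private

# Toy usage: 200 examples, 5 features, illustrative only.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
w = np.zeros(5)
for _ in range(50):
    w = dp_sgd_step(w, X, y, rng=rng)
print("learned weights:", w)
```

In this sketch, lowering clip_norm shrinks the bound on each per-example gradient and hence the scale of both the signal and the added noise, which is the knob the abstract identifies as moderating disparity growth.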