Clients Collaborate: Flexible Differentially Private Federated Learning with Guaranteed Improvement of Utility-Privacy Trade-off
CoRR (2024)
Abstract
To defend against privacy leakage of user data, differential privacy is
widely used in federated learning, but it is not free. The added noise
randomly disrupts the semantic integrity of the model, and this disturbance
accumulates as communication rounds increase. In this paper, we introduce a
novel federated learning framework with rigorous privacy guarantees, named
FedCEO, designed to strike a trade-off between model utility and user privacy
by letting clients "Collaborate with Each Other". Specifically, we perform
efficient tensor low-rank proximal optimization on stacked local model
parameters at the server, demonstrating its capability to flexibly truncate
high-frequency components in spectral space. This implies that our FedCEO can
effectively recover the disrupted semantic information by smoothing the global
semantic space for different privacy settings and continuous training
processes. Moreover, we improve the SOTA utility-privacy trade-off bound by an
order of √(d), where d is the input dimension. We illustrate our
theoretical results with experiments on representative image datasets, which
show significant performance improvements and strict privacy guarantees
under different privacy settings.
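The core server-side operation described above, low-rank proximal optimization on stacked client parameters that truncates high-frequency spectral components, can be illustrated with a minimal sketch. This is an assumption-based illustration of the general technique (the proximal operator of the nuclear norm via singular-value soft-thresholding), not the paper's exact FedCEO algorithm; the function name `lowrank_prox` and all dimensions are hypothetical.

```python
import numpy as np

def lowrank_prox(stacked_params, tau):
    """Proximal step for the nuclear norm: soft-threshold the singular
    values of the matrix of stacked (flattened) client parameters, which
    suppresses small-singular-value, high-frequency spectral components."""
    U, s, Vt = np.linalg.svd(stacked_params, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # shrink the spectrum by tau
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
# Toy setup: 8 clients, each holding a 100-dim flattened model that shares
# a rank-2 common signal, perturbed by DP-style Gaussian noise.
signal = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 100))
noisy = signal + 0.1 * rng.normal(size=(8, 100))
smoothed = lowrank_prox(noisy, tau=1.0)
```

Larger `tau` smooths more aggressively, which matches the paper's idea of adapting the amount of spectral truncation to the privacy setting (more noise, more smoothing).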