Stealing Secrecy from Outside: A Novel Gradient Inversion Attack in Federated Learning

2022 IEEE 28th International Conference on Parallel and Distributed Systems (ICPADS) (2023)

Abstract
Knowledge of the model parameters has been regarded as a vital prerequisite for recovering sensitive information from gradients in federated learning. But is federated learning safe when the model parameters are unavailable to adversaries, i.e., against external adversaries? In this paper, we answer this question by proposing a novel gradient inversion attack. Specifically, we observe a widely ignored fact in federated learning: the participants' gradient data are usually transmitted via an intermediary node. Based on this fact, we show that an external adversary is able to recover the private input from the gradients, even without access to the model parameters. Through extensive experiments on several real-world datasets, we demonstrate that our proposed attack can recover the input with pixel-wise accuracy and practical efficiency.
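The core observation behind gradient inversion, that transmitted gradients can reveal the input itself, can be illustrated with a well-known toy case: for a fully-connected layer z = Wx + b, the weight gradient is the outer product of the bias gradient and the input, so an eavesdropper who intercepts only the gradients (not W or b) can recover x exactly. This is a minimal sketch of that classic single-layer recovery, not the paper's actual attack, which targets the realistic external-adversary setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Victim's private input and a toy fully-connected layer z = W x + b
x_true = rng.normal(size=4)
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)

# Forward pass with a squared-error loss L = 0.5 * ||z - target||^2
z = W @ x_true + b
target = rng.normal(size=3)
dz = z - target                # dL/dz

# Gradients the participant would transmit through the intermediary node
grad_W = np.outer(dz, x_true)  # dL/dW = (dL/dz) outer x
grad_b = dz                    # dL/db = dL/dz

# Attack: since each row of dL/dW equals (dL/db)_i * x, dividing a
# nonzero row of grad_W by the matching entry of grad_b recovers x,
# without ever knowing W or b
i = int(np.argmax(np.abs(grad_b)))   # pick a well-conditioned row
x_recovered = grad_W[i] / grad_b[i]

print(np.allclose(x_recovered, x_true))  # → True
```

Real models require iterative gradient-matching optimization rather than this closed form, but the sketch shows why intercepted gradients alone are already sensitive data.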
Keywords
gradient inversion,grey-box attack,federated learning