Towards robust and privacy-preserving federated learning in edge computing

Computer Networks (2024)

Abstract
Federated learning (FL) has recently emerged as an attractive distributed machine learning paradigm for harnessing distributed data in edge computing. Its salient feature is that individual datasets stay local throughout the training process and only model updates need to be exchanged for aggregation. Despite its appeal, FL is confronted with critical security and privacy concerns. First, even sharing model updates/gradients can leak private information about the local datasets. Second, malicious clients may launch poisoning attacks to compromise the utility of the trained models. Driven by these security challenges, various approaches have been proposed to secure FL. However, most existing works consider either privacy preservation or robustness against poisoning attacks, but not both. In this paper, we propose RoPPFL, a new robust and privacy-preserving FL framework for edge computing applications, which supports hierarchical federated learning with privacy preservation as well as robust aggregation against poisoning attacks. RoPPFL delicately bridges local differential privacy for privacy protection and similarity-based robust aggregation for resistance against malicious clients. We formally analyze the convergence and privacy guarantees of RoPPFL. Extensive experiments demonstrate the superior performance of RoPPFL.
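The abstract does not spell out the mechanism, but the two ingredients it names, local differential privacy on the client side and similarity-based robust aggregation on the server side, can be illustrated with a minimal sketch. This is not the RoPPFL algorithm itself: the clipping bound, the Gaussian noise scale, the use of the coordinate-wise median as the similarity reference, and the non-negative weighting rule are all assumptions made here for illustration only.

```python
import numpy as np

def ldp_perturb(update, clip_norm=1.0, sigma=0.5, rng=None):
    """Client side (illustrative): clip the model update to bound its
    sensitivity, then add Gaussian noise before sharing it, in the spirit
    of local differential privacy.  clip_norm and sigma are placeholder
    values, not parameters from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

def similarity_weighted_aggregate(updates):
    """Server side (illustrative): score each noisy update by its cosine
    similarity to a robust reference direction (here the coordinate-wise
    median), drop anti-correlated updates, and average with the resulting
    weights.  The median reference and the non-negative weighting are
    assumptions of this sketch, not the paper's aggregation rule."""
    updates = np.stack(updates)                       # shape: (clients, dim)
    reference = np.median(updates, axis=0)            # robust reference direction
    ref_norm = np.linalg.norm(reference) + 1e-12
    sims = updates @ reference / (np.linalg.norm(updates, axis=1) * ref_norm + 1e-12)
    weights = np.clip(sims, 0.0, None)                # suspicious (anti-correlated) updates get weight 0
    if weights.sum() == 0:
        return reference                              # fall back to the median itself
    weights /= weights.sum()
    return weights @ updates                          # similarity-weighted average

# Toy usage: 8 honest clients plus 2 clients sending scaled, inverted (poisoned) updates.
rng = np.random.default_rng(0)
true_direction = rng.normal(size=100)
honest = [true_direction + 0.1 * rng.normal(size=100) for _ in range(8)]
poisoned = [-5.0 * true_direction for _ in range(2)]
noisy = [ldp_perturb(u, rng=rng) for u in honest + poisoned]
aggregate = similarity_weighted_aggregate(noisy)
print("cosine(aggregate, true) =",
      aggregate @ true_direction / (np.linalg.norm(aggregate) * np.linalg.norm(true_direction)))
```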
Keywords
Edge computing,Hierarchical federated learning,Poisoning attack,Differential privacy,Robustness