FLAP: Federated Learning with Attack and Privacy Awareness

crossref(2022)

Abstract
Federated learning provides data privacy protection by keeping the data used for each client's machine learning training local, and sending only model parameter updates to the centralised server/aggregator. However, the federated learning framework remains vulnerable to various attacks, such as data poisoning, launched by malicious or compromised clients. Cautious clients participating in federated learning, on the other hand, employ privacy protection techniques such as differential privacy to keep their model updates safe from inference attacks launched by the centralised aggregator. An aggregator therefore needs techniques to differentiate between model updates from benign, malicious and cautious clients, and to mitigate the effects of updates from non-benign clients. To reach this goal, we propose a novel federated learning system called FLAP which is robust against both the attacks launched by malicious clients and the privacy protections employed by cautious clients.
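The abstract does not describe FLAP's actual aggregation rule, so the following is only a minimal sketch of the setting it describes: benign clients send clean updates, cautious clients add Gaussian noise in the style of differential privacy, malicious clients send poisoned updates, and the aggregator scores updates before combining them. The median-distance down-weighting heuristic and all function names are illustrative assumptions, not the paper's method.

```python
# Hedged sketch of the three client behaviours described in the abstract and a
# simple score-based aggregator. This is NOT the FLAP algorithm; the weighting
# heuristic is an assumption for illustration only.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10
honest_update = rng.normal(size=DIM)  # stand-in for an honest model parameter update

def benign_update():
    # Benign client: sends its (slightly noisy) local update as-is.
    return honest_update + rng.normal(scale=0.05, size=DIM)

def cautious_update(noise_scale=0.5):
    # Cautious client: adds Gaussian noise to its update (differential-privacy style).
    return benign_update() + rng.normal(scale=noise_scale, size=DIM)

def malicious_update(scale=10.0):
    # Malicious client: a crude poisoning attack pushing the model in the wrong direction.
    return -scale * honest_update

updates = [benign_update() for _ in range(6)] + \
          [cautious_update() for _ in range(2)] + \
          [malicious_update() for _ in range(2)]

# Aggregator (illustrative): score each update by its distance to the coordinate-wise
# median and down-weight outliers, so poisoned updates contribute little while
# noisy-but-honest (cautious) updates are kept with reduced weight.
U = np.stack(updates)
median = np.median(U, axis=0)
dists = np.linalg.norm(U - median, axis=1)
weights = np.exp(-dists / (dists.mean() + 1e-12))
weights /= weights.sum()
aggregated = weights @ U

print("distance of aggregate from honest update:",
      np.linalg.norm(aggregated - honest_update))
```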