MESAS: Poisoning Defense for Federated Learning Resilient against Adaptive Attackers

Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS 2023)

Abstract
Federated Learning (FL) enhances decentralized machine learning by safeguarding data privacy, reducing communication costs, and improving model performance through diverse data sources. However, FL is vulnerable to untargeted poisoning attacks and targeted backdoor attacks, which threaten model integrity and security. Backdoors are especially difficult to prevent due to their stealthy nature. Existing mitigation techniques have shown efficacy but often overlook realistic adversaries and diverse data distributions. This work introduces the concept of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously, and extensive empirical testing shows that existing defenses are vulnerable under this adversary model. We present Metric-Cascades (MESAS), a novel defense tailored to these more realistic scenarios and adversary models. MESAS applies multiple detection metrics simultaneously to filter poisoned model updates, confronting adaptive attackers with a complex multi-objective problem. In a comprehensive evaluation across nine backdoors and three datasets, MESAS outperforms existing defenses in distinguishing backdoors from data distribution-related distortions within and across clients. MESAS offers a robust defense against strong adaptive adversaries in real-world data settings, with a modest average overhead of just 24.37 seconds.
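To make the metric-cascade idea concrete, here is a minimal sketch, not the paper's algorithm: the two metrics (L2 distance to the global model and cosine similarity to the mean update), the z-score outlier test, and the threshold z_thresh=2.0 are all illustrative assumptions, while MESAS's actual metric set and statistical filtering are specified in the paper. What the sketch preserves is the core point of the abstract: each surviving update must pass every stage, so an adaptive attacker faces several simultaneous constraints rather than a single one.

```python
import numpy as np

def l2_to_global(update, global_model):
    """L2 distance between a client's model update and the current global model."""
    return float(np.linalg.norm(update - global_model))

def cosine_to_mean(update, mean_update):
    """Cosine similarity between a client's update and the mean of all updates."""
    denom = np.linalg.norm(update) * np.linalg.norm(mean_update)
    return float(update @ mean_update / denom) if denom else 0.0

def metric_cascade(updates, global_model, z_thresh=2.0):
    """Filter client updates through a cascade of detection metrics.

    Each stage computes one metric over the surviving updates and discards
    statistical outliers (a simple z-score test stands in here for the
    paper's statistical filtering). An adaptive attacker must look
    inconspicuous under every metric at once, which is the multi-objective
    burden described in the abstract.
    """
    survivors = list(range(len(updates)))
    mean_update = np.mean(updates, axis=0)
    stages = [
        lambda u: l2_to_global(u, global_model),   # illustrative metric 1
        lambda u: cosine_to_mean(u, mean_update),  # illustrative metric 2
    ]
    for metric in stages:
        values = np.array([metric(updates[i]) for i in survivors])
        mu, sigma = values.mean(), values.std()
        if sigma == 0.0:
            continue  # all survivors identical under this metric
        survivors = [i for i, v in zip(survivors, values)
                     if abs(v - mu) / sigma <= z_thresh]
    return survivors

# Toy round: nine benign updates close to the global model, one crafted far away.
rng = np.random.default_rng(0)
global_model = rng.normal(size=100)
benign = [global_model + 0.01 * rng.normal(size=100) for _ in range(9)]
poisoned = global_model + 2.0 * rng.normal(size=100)
kept = metric_cascade(benign + [poisoned], global_model)
print("accepted client indices:", kept)  # index 9 (the poisoned update) is rejected
```

In this toy round the poisoned update fails the first stage because its distance to the global model is a statistical outlier; a more subtle attacker that shrinks that distance would then have to evade the remaining stages as well, which is the cascade's design rationale.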
Keywords
federated learning, security, poisoning attacks, backdoor attacks