Contribution-Wise Byzantine-Robust Aggregation for Class-Balanced Federated Learning

Information Sciences (2024)

Abstract
Federated learning (FL) is a promising approach that allows many clients to jointly train a model without sharing their raw data. Because clients have different data preferences, class imbalance frequently arises in real-world FL and exposes existing FL methods to poisoning attacks. In this work, we first propose a new attack, the Class Imbalance Attack, which can drive the testing accuracy of one or more targeted classes down to zero even under state-of-the-art robust FL methods. To defend against such attacks, we further propose a Class-Balanced FL method with a novel contribution-wise Byzantine-robust aggregation rule. The server is initialized with a small dataset on which it maintains a model (the server model); in the designed rule, an honest score and a contribution score are dynamically assigned to each client according to this server model. The two scores are then used to compute a weighted average of the client gradients at each training iteration. Experiments are conducted on five datasets against state-of-the-art poisoning attacks, including the Class Imbalance Attack, and the empirical results demonstrate the effectiveness of the proposed Class-Balanced FL method.
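As a rough illustration of the score-weighted aggregation described above (not the authors' exact algorithm), the sketch below assigns each client gradient an honest score, assumed here to be the ReLU-clipped cosine similarity to the gradient produced by the server model on its small dataset, multiplies it by a given contribution score, and uses the normalized products as weights in the averaged update. The function names and the score definitions are illustrative assumptions; the paper's actual scoring rules may differ.

```python
# Hedged sketch of a contribution-wise weighted aggregation step.
# Assumption (not specified in the abstract): the honest score is the
# ReLU-clipped cosine similarity between a client gradient and the gradient
# computed by the server model on its small root dataset; the contribution
# score is supplied externally.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0


def aggregate(client_grads: list[np.ndarray],
              server_grad: np.ndarray,
              contribution_scores: np.ndarray) -> np.ndarray:
    """Weighted average of client gradients using honest * contribution weights."""
    honest = np.array([max(0.0, cosine_similarity(g, server_grad))
                       for g in client_grads])
    weights = honest * contribution_scores
    if weights.sum() == 0:             # fall back to the server gradient if all weights vanish
        return server_grad
    weights = weights / weights.sum()  # normalize so the weights sum to 1
    return sum(w * g for w, g in zip(weights, client_grads))


# Toy usage: three clients, the last of which sends a poisoned (sign-flipped) gradient.
rng = np.random.default_rng(0)
server_grad = rng.normal(size=10)
clients = [server_grad + 0.1 * rng.normal(size=10) for _ in range(2)]
clients.append(-5.0 * server_grad)                 # poisoned update
contrib = np.array([1.0, 1.0, 1.0])
print(aggregate(clients, server_grad, contrib))    # poisoned client gets zero weight
```

In this toy run the sign-flipped gradient receives a negative cosine similarity, so its honest score is clipped to zero and it is excluded from the weighted average.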
Keywords
Federated Learning (FL), Poisoning Attack, Byzantine-Robust Aggregation, Adversarial Machine Learning, Non-Independent and Identically Distributed (Non-IID)