Byzantine Tolerant Algorithms for Federated Learning

IEEE Transactions on Network Science and Engineering (2023)

Abstract
In federated learning, workers periodically upload locally computed weights to a federated learning server (FL server). When Byzantine attacks are present in the system, attacked workers may upload incorrect weights to the parameter server, i.e., the information received by the FL server is not always the true values computed by the workers. Previously proposed score-based, median-based, and distance-based defense algorithms rely on two assumptions that are unrealistic in federated learning: (1) the dataset on each worker is independent and identically distributed (i.i.d.), and (2) the majority of all participating workers are honest. In federated learning, however, a worker may keep a non-i.i.d. private dataset, and malicious workers may form the majority in some iterations. In this paper, we focus on model-poisoning Byzantine attacks and propose a novel reference-dataset-based algorithm, along with a practical Two-Filter algorithm (ToFi), to defend against Byzantine attacks in federated learning. Our experiments highlight the effectiveness of our algorithm compared with previous algorithms in different settings.
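To illustrate the median-based defenses the abstract contrasts against, here is a minimal sketch of coordinate-wise median aggregation, a standard Byzantine-robust baseline. This is an illustration of the prior approach, not the paper's reference-dataset or ToFi algorithm; all names below are hypothetical.

```python
import numpy as np

def coordinate_wise_median(updates):
    """Aggregate worker weight vectors by taking the median of each
    coordinate, which bounds the influence of any single outlier."""
    return np.median(np.stack(updates), axis=0)

# Three honest workers report weights near the true value; one Byzantine
# worker reports an arbitrarily large update.
honest = [np.array([1.0, 2.0]), np.array([1.1, 2.1]), np.array([0.9, 1.9])]
byzantine = [np.array([100.0, -100.0])]

agg = coordinate_wise_median(honest + byzantine)
# The aggregate stays close to the honest values despite the attack.
```

Note that this defense implicitly assumes an honest majority and similar (i.i.d.) data across workers; when either assumption fails, as the abstract argues can happen in federated learning, the median itself can be pulled toward the attackers.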
Keywords
Byzantine fault tolerance, convergence, deep learning, federated learning, neural networks