AdaFed: Optimizing Participation-Aware Federated Learning with Adaptive Aggregation Weights

IEEE Transactions on Network Science and Engineering (2022)

Abstract
Federated learning (FL) has become one of the mainstream paradigms for multi-party collaborative learning with privacy protection. Since it is difficult to guarantee that all FL devices will be active simultaneously, a common approach is to use only a partial set of devices in each round of model training. However, such partial device participation may introduce significant bias into the trained model. In this paper, we first conduct a theoretical analysis to investigate the negative impact of biased device participation and derive the convergence rate of FedAvg, the most well-known FL algorithm, under biased device participation. We further propose an optimized participation-aware federated learning algorithm called AdaFed, which adaptively tunes the aggregation weight of each device based on its historical participation records, removing the bias introduced by partial device participation. For rigor, we formally prove the convergence guarantee of AdaFed. Finally, we conduct trace-driven experiments to validate the effectiveness of the proposed algorithm. The experimental results are consistent with our theoretical analysis and show that AdaFed improves global model accuracy and converges much faster than state-of-the-art FL algorithms by eliminating the negative effect of biased device participation.
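The abstract does not give AdaFed's update rule, but the idea it describes — up-weighting devices whose historical participation lags behind their data share so the aggregated model stays unbiased — can be sketched as follows. The function names (`participation_weights`, `aggregate`) and the specific correction p_k / q_k (data share divided by empirical participation frequency) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def participation_weights(data_shares, participation_counts, selected, num_rounds):
    """Illustrative participation-corrected aggregation weights (not the
    paper's exact rule).

    data_shares[k]          : proportion of training data on device k (p_k)
    participation_counts[k] : rounds device k has participated in so far
    selected                : indices of devices chosen this round
    num_rounds              : total rounds completed so far (>= 1)
    """
    shares = np.asarray(data_shares, dtype=float)
    counts = np.asarray(participation_counts, dtype=float)
    # Empirical participation frequency q_k, floored to avoid division by zero.
    freq = np.maximum(counts / num_rounds, 1e-8)
    # Up-weight devices whose historical participation lags their data share.
    w = shares[selected] / freq[selected]
    return w / w.sum()

def aggregate(local_models, weights):
    """Weighted average of the selected devices' model parameters."""
    return sum(w * m for w, m in zip(weights, local_models))
```

For example, with four devices holding equal data shares, if device 0 has participated in 8 of 10 rounds while device 1 has participated in only 4, then when both are selected the sketch assigns device 1 roughly twice the weight of device 0, compensating for its under-representation.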
Keywords
Federated learning, adaptive aggregation weights, convergence analysis, biased device participation