Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning
CoRR (2024)
Abstract
Client selection significantly affects the system convergence efficiency and
is a crucial problem in federated learning. Existing methods often select
clients by evaluating each round individually and overlook the necessity for
long-term optimization, resulting in suboptimal performance and potential
fairness issues. In this study, we propose a novel client selection strategy
designed to emulate the performance achieved with full client participation. In
a single round, we select clients by minimizing the gradient-space estimation
error between the client subset and the full client set. In multi-round
selection, we introduce a novel individual fairness constraint, which ensures
that clients with similar data distributions have similar frequencies of being
selected. This constraint guides the client selection process from a long-term
perspective. We employ Lyapunov optimization and submodular functions to
efficiently identify the optimal subset of clients, and provide a theoretical
analysis of the convergence properties. Experiments demonstrate that the
proposed strategy significantly improves both accuracy and fairness compared to
previous methods while remaining efficient, incurring only minimal time overhead.
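To make the single-round objective concrete, the following is a minimal sketch of greedy gradient-based subset selection: it picks clients whose average gradient best approximates the full-participation average. This is an illustrative assumption, not the paper's exact algorithm; in particular, the long-term fairness constraint and the Lyapunov-based multi-round machinery are omitted here.

```python
import numpy as np

def select_clients(grads: np.ndarray, k: int) -> list[int]:
    """Greedily choose k clients whose mean gradient minimizes the
    estimation error against the full-client mean gradient.

    grads: (num_clients, dim) array of per-client gradients.
    Note: this is a hypothetical sketch of the single-round objective;
    the paper additionally enforces an individual fairness constraint
    across rounds, which is not modeled here.
    """
    full_avg = grads.mean(axis=0)  # gradient under full participation
    selected: list[int] = []
    remaining = list(range(len(grads)))
    for _ in range(k):
        # Pick the client that most reduces the gradient-space error.
        best, best_err = remaining[0], np.inf
        for c in remaining:
            subset_avg = grads[selected + [c]].mean(axis=0)
            err = np.linalg.norm(subset_avg - full_avg)
            if err < best_err:
                best, best_err = c, err
        selected.append(best)
        remaining.remove(best)
    return selected
```

With two clusters of client gradients, the greedy rule naturally picks one representative from each, so the subset's mean gradient matches the full average, which is the "emulating full participation" intuition in miniature.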