SCAFFLSA: Quantifying and Eliminating Heterogeneity Bias in Federated Linear Stochastic Approximation and Temporal Difference Learning
CoRR (2024)
Abstract
In this paper, we perform a non-asymptotic analysis of the federated linear
stochastic approximation (FedLSA) algorithm. We explicitly quantify the bias
introduced by local training with heterogeneous agents, and investigate the
sample complexity of the algorithm. We show that the communication complexity
of FedLSA scales polynomially with the inverse of the desired precision ϵ, which
limits the benefits of federation. To overcome this, we propose SCAFFLSA, a
novel variant of FedLSA, that uses control variates to correct the bias of
local training, and prove its convergence without assumptions on statistical
heterogeneity. We apply the proposed methodology to federated temporal
difference learning with linear function approximation, and analyze the
corresponding complexity improvements.
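The bias-correction idea described above can be illustrated with a small numerical sketch. The code below is not the paper's exact algorithm: the synthetic heterogeneous agents (`A`, `b`), the step size, and the SCAFFOLD-style control-variate update are illustrative assumptions. It runs deterministic (noiseless) FedLSA with and without control variates, showing that heterogeneity alone produces a bias that the correction removes.

```python
import numpy as np

# Synthetic setup (illustrative, not from the paper): agent i's local mean
# field is A[i] @ theta - b[i]; the federated target theta_star solves the
# *averaged* system A_bar @ theta_star = b_bar.
rng = np.random.default_rng(0)
d, n_agents = 4, 5
A = [(1.0 + 0.5 * i) * np.eye(d) for i in range(n_agents)]  # heterogeneous dynamics
b = [rng.normal(size=d) for _ in range(n_agents)]           # heterogeneous targets
A_bar = sum(A) / n_agents
b_bar = sum(b) / n_agents
theta_star = np.linalg.solve(A_bar, b_bar)

def federated_lsa(rounds=500, local_steps=10, eta=0.02, correct_bias=True):
    """Deterministic FedLSA; if correct_bias, local updates are shifted by
    SCAFFOLD-style control variates (c_local[i], c_global)."""
    theta = np.zeros(d)
    c_local = [np.zeros(d) for _ in range(n_agents)]
    c_global = np.zeros(d)
    for _ in range(rounds):
        thetas, new_cs = [], []
        for i in range(n_agents):
            th = theta.copy()
            for _ in range(local_steps):
                g = A[i] @ th - b[i]               # local (noiseless) mean field
                if correct_bias:
                    g = g - c_local[i] + c_global  # remove client drift
                th = th - eta * g
            # control-variate refresh (SCAFFOLD-style "option II")
            new_cs.append(c_local[i] - c_global
                          + (theta - th) / (eta * local_steps))
            thetas.append(th)
        theta = sum(thetas) / n_agents             # server averaging
        if correct_bias:
            c_global = c_global + sum(nc - c for nc, c in zip(new_cs, c_local)) / n_agents
            c_local = new_cs
    return theta

err_plain = np.linalg.norm(federated_lsa(correct_bias=False) - theta_star)
err_corrected = np.linalg.norm(federated_lsa(correct_bias=True) - theta_star)
print(f"FedLSA error: {err_plain:.2e}, with control variates: {err_corrected:.2e}")
```

Even without noise, plain FedLSA converges to a fixed point that is biased away from `theta_star` because each agent's local passes pull toward its own solution; the control variates cancel that drift, so the corrected iterates converge to the solution of the averaged system.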