Convergence Analysis of Split Federated Learning on Heterogeneous Data
CoRR (2024)
Abstract
Split federated learning (SFL) is a recent distributed approach for
collaborative model training among multiple clients. In SFL, a global model is
typically split into two parts, where clients train one part in a parallel
federated manner, and a main server trains the other. Despite the recent
research on SFL algorithm development, the convergence analysis of SFL is
missing in the literature, and this paper aims to fill this gap. The analysis
of SFL can be more challenging than that of federated learning (FL), due to the
potential dual-paced updates at the clients and the main server. We provide
convergence analysis of SFL for strongly convex and general convex objectives
on heterogeneous data. The convergence rates are O(1/T) and
O(1/√T), respectively, where T denotes the total number of rounds
of SFL training. We further extend the analysis to non-convex objectives
and to settings where some clients may be unavailable during training. Numerical experiments
validate our theoretical results and show that SFL outperforms FL and split
learning (SL) when data is highly heterogeneous across a large number of
clients.
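To make the training structure described above concrete, here is a minimal sketch of one SFL round in PyTorch. It is not the paper's implementation: the split point, model sizes, uniform averaging weights, and names such as `make_parts`, `sfl_round`, and `client_loaders` are illustrative assumptions. The sketch shows the two-part model, the forward/backward pass crossing the cut layer, the dual-paced updates at the clients and the main server, and the federated averaging of the client-side parts.

```python
import copy
import torch
import torch.nn as nn

def make_parts():
    # Hypothetical split: clients hold the first layers, the main server the rest.
    client_part = nn.Sequential(nn.Linear(10, 32), nn.ReLU())  # client-side part
    server_part = nn.Sequential(nn.Linear(32, 1))              # server-side part
    return client_part, server_part

def sfl_round(global_client_part, server_part, client_loaders, lr=0.1):
    """One illustrative SFL round: each client trains a copy of the
    client-side part, activations cross the cut layer to the main server,
    which updates the server-side part; client parts are then averaged."""
    updated_parts = []
    for loader in client_loaders:
        part = copy.deepcopy(global_client_part)
        opt_c = torch.optim.SGD(part.parameters(), lr=lr)
        opt_s = torch.optim.SGD(server_part.parameters(), lr=lr)
        for x, y in loader:
            smashed = part(x)                 # client forward to the cut layer
            out = server_part(smashed)        # server completes the forward pass
            loss = nn.functional.mse_loss(out, y)
            opt_c.zero_grad(); opt_s.zero_grad()
            loss.backward()                   # gradients flow back across the cut
            opt_s.step()                      # server-side update (its own pace)
            opt_c.step()                      # client-side update
        updated_parts.append(part.state_dict())
    # Federated averaging of the client-side parts (uniform weights assumed).
    avg = {k: sum(sd[k] for sd in updated_parts) / len(updated_parts)
           for k in updated_parts[0]}
    global_client_part.load_state_dict(avg)
    return global_client_part, server_part

# Hypothetical usage with two clients and synthetic data; T is the
# number of SFL rounds appearing in the stated convergence rates.
loaders = [[(torch.randn(8, 10), torch.randn(8, 1))] for _ in range(2)]
c, s = make_parts()
for t in range(5):  # T = 5 rounds
    c, s = sfl_round(c, s, loaders)
```

The two optimizer steps inside the inner loop are what the abstract calls dual-paced updates: the server-side part is updated on every batch from every client, while each client-side part advances only on its own data before being averaged once per round.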