One-Shot Federated Learning: Theoretical Limits and Algorithms to Achieve Them

Journal of Machine Learning Research (2021)

Abstract
We consider distributed statistical optimization in the one-shot setting, where m machines each observe n i.i.d. samples. Based on its observed samples, each machine sends a B-bit message to a server. The server collects the messages from all machines and estimates a parameter that minimizes an expected convex loss function. We investigate the impact of the communication constraint B on the expected error and derive a tight lower bound on the error achievable by any algorithm. We then propose an estimator, called the Multi-Resolution Estimator (MRE), whose expected error (when B >= d log mn, where d is the dimension of the parameter) meets this lower bound up to a poly-logarithmic factor in mn. Unlike that of existing algorithms, the expected error of MRE tends to zero as the number of machines m goes to infinity, even when the number of samples per machine n remains bounded by a constant. We also address learning under a tiny communication budget, and present lower and upper error bounds for the case where the budget B is a constant.
Keywords
Federated learning, Distributed learning, Few shot learning, Communication efficiency, Statistical optimization
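The one-shot communication model in the abstract (m machines, n local samples each, one B-bit message per machine, server-side aggregation) can be illustrated with a minimal sketch. This is not the paper's MRE algorithm; it is a naive hypothetical baseline for a one-dimensional mean-estimation instance, where each machine uniformly quantizes its local sample mean to B bits and the server averages the decoded messages. All function names and parameter ranges here are assumptions for illustration.

```python
import random

def quantize(x, bits, lo=-1.0, hi=1.0):
    """Clip x to [lo, hi] and map it to a `bits`-bit integer code."""
    levels = 2 ** bits
    x = min(max(x, lo), hi)
    return round((x - lo) / (hi - lo) * (levels - 1))

def dequantize(code, bits, lo=-1.0, hi=1.0):
    """Map a `bits`-bit integer code back to a value in [lo, hi]."""
    levels = 2 ** bits
    return lo + code / (levels - 1) * (hi - lo)

def one_shot_estimate(machines, bits):
    # One-shot protocol: each machine sends a single B-bit message
    # (its quantized local sample mean); the server decodes and averages.
    msgs = [quantize(sum(s) / len(s), bits) for s in machines]
    return sum(dequantize(c, bits) for c in msgs) / len(msgs)

# Toy instance: m machines with n Gaussian samples around theta.
random.seed(0)
theta = 0.3
m, n, B = 1000, 5, 8
machines = [[theta + random.gauss(0, 0.5) for _ in range(n)]
            for _ in range(m)]
print(one_shot_estimate(machines, B))  # close to theta
```

Note that with n held constant, the error of this naive averaging baseline is dominated by the per-machine quantization and sampling noise; the abstract's point is that MRE, by contrast, drives the error to zero as m grows even for bounded n.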