Communication Efficient Framework for Decentralized Machine Learning

2020 54th Annual Conference on Information Sciences and Systems (CISS)

Abstract
In this paper, we propose a fast, privacy-aware, and communication-efficient decentralized framework to solve the distributed machine learning (DML) problem. The proposed algorithm, termed GADMM, is based on the Alternating Direction Method of Multipliers (ADMM). The key novelty of the proposed algorithm is that it solves the problem over a decentralized topology in which at most half of the workers compete for the limited communication resources at any given time. Moreover, each worker exchanges its locally trained model only with its two neighboring workers, thereby training a global model with lower communication overhead per exchange. We prove that GADMM converges faster than centralized batch gradient descent for convex loss functions, and we numerically show that it converges faster and is more communication-efficient than state-of-the-art communication-efficient algorithms such as the Lazily Aggregated Gradient (LAG) and dual averaging, on linear and logistic regression tasks with synthetic and real datasets.
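To make the group-alternating idea concrete, below is a minimal sketch of a GADMM-style update loop for decentralized linear regression on a chain of workers. It assumes a chain topology, local least-squares losses, a fixed penalty parameter rho, and a head group consisting of the even-indexed workers; the function name, variable names, and closed-form solves are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch of group-alternating ADMM (GADMM-style) for decentralized
# linear regression on a chain of N workers. Assumptions: local loss
# f_n(theta) = 0.5*||X_n @ theta - y_n||^2, one dual variable per chain
# edge (n, n+1), fixed penalty rho, head group = even-indexed workers.
import numpy as np

def gadmm_linear_regression(X, y, rho=1.0, iters=100):
    """X: list of (m_n, d) arrays; y: list of (m_n,) arrays, one pair per worker."""
    N = len(X)
    d = X[0].shape[1]
    theta = [np.zeros(d) for _ in range(N)]       # local models
    lam = [np.zeros(d) for _ in range(N - 1)]     # dual variable for edge (n, n+1)

    def local_solve(n):
        # Closed-form minimizer of worker n's local augmented Lagrangian:
        # 0.5*||X_n t - y_n||^2 plus dual and rho/2 proximity terms toward
        # each neighbor's current model.
        A = X[n].T @ X[n]
        b = X[n].T @ y[n]
        k = 0
        if n > 0:                                 # left neighbor, edge n-1
            b += lam[n - 1] + rho * theta[n - 1]
            k += 1
        if n < N - 1:                             # right neighbor, edge n
            b += -lam[n] + rho * theta[n + 1]
            k += 1
        return np.linalg.solve(A + k * rho * np.eye(d), b)

    for _ in range(iters):
        for n in range(0, N, 2):                  # head group updates (can run in parallel)
            theta[n] = local_solve(n)
        for n in range(1, N, 2):                  # tail group updates using fresh head models
            theta[n] = local_solve(n)
        for e in range(N - 1):                    # dual ascent on each consensus constraint
            lam[e] = lam[e] + rho * (theta[e] - theta[e + 1])
    return theta
```

In this sketch, only one of the two groups updates and transmits its model at a time, and each worker communicates only with its two chain neighbors, which is the source of the reduced communication load described above.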
Keywords
decentralized machine learning,distributed machine learning problem,alternating direction method of multipliers algorithm,decentralized topology,communication resources,worker exchanges,locally trained model,neighboring workers,global model,communication overhead,centralized batch gradient descent,privacy-aware communication-efficient decentralized framework,convex loss functions,linear regression tasks,logistic regression tasks,GADMM