A Hybrid Data And Model Transfer Framework For Distributed Machine Learning
2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), 2019
Abstract
Centralized model transfer is widely used in distributed machine learning frameworks; however, such frameworks place a high computation load on the central parameter server and incur heavy communication cost. To address this issue, this paper proposes a novel distributed machine learning framework that combines decentralized instance communication with centralized model ensemble. Capitalizing on fully self-controlled computation and communication at every data node, the framework exploits instances in a decentralized way to conduct centralized model ensemble, thereby alleviating the burden on the parameter server. In addition, by trading off communication cost against learning performance, a reinforcement meta-learning based communication scheme is developed. It actively and adaptively determines the instance transfer process and thus considerably reduces the communication cost. Finally, numerical results validate the performance of the framework in terms of learning accuracy and communication cost.
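The abstract gives only a high-level view of the framework; the following is a minimal sketch of the general idea, assuming a simplified setup. Data nodes exchange a budget-limited subset of instances peer-to-peer, each node trains its own local model, and the parameter server only ensembles the finished models rather than relaying raw data. All class and function names are illustrative, and the random instance selection below is a crude stand-in for the paper's reinforcement meta-learning transfer policy, which is not specified in the abstract.

```python
import random

random.seed(0)

class DataNode:
    """A data node with fully self-controlled computation and communication."""

    def __init__(self, instances):
        self.instances = list(instances)   # local (x, y) pairs
        self.weights = [0.0, 0.0]          # tiny linear model: y ≈ w0 + w1*x

    def select_instances_to_send(self, budget):
        # Stand-in for the learned transfer policy: sample up to `budget`
        # instances at random instead of choosing them adaptively.
        k = min(budget, len(self.instances))
        return random.sample(self.instances, k)

    def receive(self, instances):
        # Decentralized instance communication: peers send data directly.
        self.instances.extend(instances)

    def train(self, epochs=200, lr=0.01):
        # Plain least-squares gradient descent on the local instance pool.
        for _ in range(epochs):
            g0 = g1 = 0.0
            for x, y in self.instances:
                err = (self.weights[0] + self.weights[1] * x) - y
                g0 += err
                g1 += err * x
            n = len(self.instances)
            self.weights[0] -= lr * g0 / n
            self.weights[1] -= lr * g1 / n

def ensemble_predict(nodes, x):
    # Centralized model ensemble: the server averages model outputs
    # and never touches raw instances.
    return sum(n.weights[0] + n.weights[1] * x for n in nodes) / len(nodes)

# Two nodes holding disjoint halves of the line y = 2x.
node_a = DataNode([(x, 2.0 * x) for x in range(0, 5)])
node_b = DataNode([(x, 2.0 * x) for x in range(5, 10)])

# Peer-to-peer instance exchange under a communication budget of 2.
node_a.receive(node_b.select_instances_to_send(budget=2))
node_b.receive(node_a.select_instances_to_send(budget=2))

for node in (node_a, node_b):
    node.train()

print(ensemble_predict([node_a, node_b], 4.0))
```

In this toy setting the `budget` argument plays the role of the communication/performance tradeoff: a larger budget shares more instances (better local models, higher communication cost), which is the knob the paper's reinforcement meta-learning scheme is said to tune adaptively.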
Keywords
centralized model ensemble, learning performance, reinforcement meta-learning based communication scheme, instance transfer process, hybrid data, model transfer framework, centralized model transfer, distributed machine learning frameworks, central parameter server, distributed machine learning framework, decentralized instance communication, fully self-controlled computation, data node