Online Distributed Stochastic Gradient Algorithm for Nonconvex Optimization With Compressed Communication.

IEEE Transactions on Automatic Control (2024)

Abstract
This paper examines an online distributed optimization problem over an unbalanced digraph, in which a group of nodes collectively seeks a minimizer of a time-varying global cost function while the data remain distributed among the computing nodes. As the problem size grows, communication inevitably becomes a bottleneck, since each node potentially transmits large amounts of information to its neighbors at every exchange. To handle this issue, we design an online stochastic gradient algorithm with compressed communication for the case where gradient information is available. We derive regret bounds for both non-convex and convex cost functions that are of almost the same order as those of classic distributed optimization algorithms with exact communication. For the scenario in which gradient information is not accessible, a bandit version of the algorithm is then proposed, and explicit regret bounds are likewise established for both non-convex and convex cost functions. The results reveal that the performance of the bandit-feedback method is close to that of the gradient-feedback method. Several numerical experiments corroborate the main theoretical findings and exemplify a remarkable speedup over existing distributed algorithms with exact communication.
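As a rough illustration of the general idea described in the abstract, the following minimal sketch combines a consensus step on compressed messages with a local stochastic gradient step on a time-varying cost. It is not the paper's algorithm: the mixing matrix, top-k compressor, quadratic costs, step size, and noise level are all hypothetical choices made only for this example.

```python
# Illustrative sketch (not the paper's algorithm) of online distributed
# stochastic gradient descent with compressed communication.
import numpy as np

def top_k_compress(v, k):
    """Keep only the k largest-magnitude entries of v (a common compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def local_grad(x, target, rng):
    """Noisy gradient of the time-varying local cost 0.5*||x - target||^2."""
    return (x - target) + 0.01 * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
n_nodes, dim, T, k, step = 4, 10, 3, 50, 0.1
# Hypothetical row-stochastic mixing weights over a fixed directed ring.
W = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.0, 0.0, 0.5]])
x = rng.standard_normal((n_nodes, dim))

for t in range(T):
    target = np.sin(0.1 * t) * np.ones(dim)        # time-varying minimizer
    msgs = np.array([top_k_compress(x[i], k) for i in range(n_nodes)])
    mixed = W @ msgs                               # consensus on compressed states
    grads = np.array([local_grad(x[i], target, rng) for i in range(n_nodes)])
    x = mixed - step * grads                       # local stochastic gradient step

print("disagreement across nodes:", np.linalg.norm(x - x.mean(axis=0)))
```

In a bandit-feedback variant, the `local_grad` call would be replaced by a zeroth-order estimate built from sampled function values, since gradients are assumed unavailable in that setting.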
Keywords
Bandit-feedback, compressed communication, distributed optimization, non-convex optimization, online optimization, stochastic approximation