Optimal gradient tracking for decentralized optimization

arXiv (2023)

Abstract
In this paper, we focus on solving the decentralized optimization problem of minimizing the sum of n objective functions over a multi-agent network. The agents are embedded in an undirected graph where they can only send/receive information directly to/from their immediate neighbors. Assuming smooth and strongly convex objective functions, we propose an Optimal Gradient Tracking (OGT) method that simultaneously achieves the optimal gradient computation complexity O(√κ log(1/ϵ)) and the optimal communication complexity O(√(κ/θ) log(1/ϵ)), where κ and 1/θ denote the condition numbers related to the objective functions and the communication graph, respectively. To the best of our knowledge, OGT is the first single-loop decentralized gradient-type method that is optimal in both gradient computation and communication complexities. The development of OGT involves two building blocks that are also of independent interest. The first is a new decentralized gradient tracking method termed "Snapshot" Gradient Tracking (SS-GT), which achieves gradient computation and communication complexities of O(√κ log(1/ϵ)) and O((√κ/θ) log(1/ϵ)), respectively. SS-GT can potentially be extended to more general settings than OGT. The second is a technique termed Loopless Chebyshev Acceleration (LCA), which can be implemented "looplessly" yet achieves an effect similar to adding multiple inner loops of Chebyshev acceleration to the algorithm. Beyond SS-GT, the LCA technique can accelerate many other gradient-tracking-based methods with respect to the graph condition number 1/θ.
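For context, the sketch below shows the classical (non-accelerated) gradient tracking template that methods like OGT and SS-GT build on and accelerate. It is not the paper's algorithm: the ring topology, Metropolis mixing weights, quadratic local objectives, step size, and iteration count are illustrative assumptions chosen so the snippet runs standalone. Each agent mixes its iterate with its neighbors through a doubly stochastic matrix W and descends along a tracker y that estimates the network-average gradient.

```python
import numpy as np

# Toy setup (assumption): n agents, each holding a strongly convex quadratic
# f_i(x) = 0.5 * ||A_i x - b_i||^2; the global objective is their sum.
rng = np.random.default_rng(0)
n, d = 5, 3
A = [rng.standard_normal((8, d)) + np.eye(8, d) for _ in range(n)]
b = [rng.standard_normal(8) for _ in range(n)]

def grad(i, x):
    """Local gradient of f_i at x."""
    return A[i].T @ (A[i] @ x - b[i])

# Doubly stochastic mixing matrix for a ring graph (Metropolis weights).
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

# Gradient tracking: row i of x is agent i's iterate; y tracks the average gradient.
x = np.zeros((n, d))
y = np.vstack([grad(i, x[i]) for i in range(n)])  # initialize tracker at local gradients
g_old = y.copy()
alpha = 1e-2  # step size; assumed small enough for convergence

for _ in range(3000):
    x = W @ x - alpha * y                           # consensus step + descent along tracker
    g_new = np.vstack([grad(i, x[i]) for i in range(n)])
    y = W @ y + g_new - g_old                       # tracker update; preserves sum of rows of y
    g_old = g_new

# All agents approach the minimizer of the summed objective.
x_star = np.linalg.solve(sum(Ai.T @ Ai for Ai in A),
                         sum(Ai.T @ bi for Ai, bi in zip(A, b)))
print("max agent error:", np.abs(x - x_star).max())
```

The update y ← Wy + g_new − g_old conserves the sum of the tracker rows, so y remains a running estimate of the average gradient across agents; this invariant is the defining property of gradient tracking methods, whose dependence on κ and θ the paper's techniques improve.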
Keywords
Decentralized optimization, Convex optimization, Accelerated gradient method, Randomized algorithm