Randomized Gradient-Free Method for Multiagent Optimization Over Time-Varying Networks

IEEE Transactions on Neural Networks and Learning Systems (2015)

Cited by 88 | Views 44
Abstract
In this brief, we consider the multiagent optimization over a network where multiple agents try to minimize a sum of nonsmooth but Lipschitz continuous functions, subject to a convex state constraint set. The underlying network topology is modeled as time varying. We propose a randomized derivative-free method, where in each update, the random gradient-free oracles are utilized instead of the subg...
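The abstract describes replacing subgradients with random gradient-free oracles. A standard construction of such an oracle (in the style of Nesterov's Gaussian smoothing; the exact form used in the paper may differ) perturbs the point along a random Gaussian direction and forms a finite-difference estimate, which is then used in a projected update onto the convex constraint set. A minimal sketch, where the function `f`, the smoothing parameter `mu`, the step size, and the box constraint set are all illustrative assumptions:

```python
import random

def gradient_free_oracle(f, x, mu=1e-4):
    """Two-point random gradient-free oracle (Gaussian smoothing sketch).

    Estimates a gradient of the smoothed surrogate
    f_mu(x) = E_u[f(x + mu * u)], with u ~ N(0, I),
    using only function values -- no subgradients required.
    """
    u = [random.gauss(0.0, 1.0) for _ in x]          # random direction
    x_pert = [xi + mu * ui for xi, ui in zip(x, u)]  # perturbed point
    scale = (f(x_pert) - f(x)) / mu                  # finite difference
    return [scale * ui for ui in u]

# Example: a nonsmooth but Lipschitz continuous function, as in the abstract.
f = lambda x: abs(x[0]) + abs(x[1])

# One projected oracle step, projecting onto the (assumed) convex
# constraint set [-1, 1]^2; the minimizer is the origin.
x = [0.5, -0.5]
step = 0.1
g = gradient_free_oracle(f, x)
x = [min(1.0, max(-1.0, xi - step * gi)) for xi, gi in zip(x, g)]
```

In the multiagent setting each agent would apply such an update to its local objective and then average its iterate with those of its time-varying neighbors; the sketch above shows only the single-agent oracle step.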
Keywords
Optimization, Convergence, Smoothing methods, Linear programming, Vectors, Network topology, Learning systems