Distributed Minmax Strategy for Multiplayer Games: Stability, Robustness, and Algorithms

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2024)

Abstract
This article studies a distributed minmax strategy for multiplayer games and develops reinforcement learning (RL) algorithms to solve it. The proposed minmax strategy is distributed in the sense that each player's optimal control policy is found without knowledge of the other players' policies. Each player obtains its distributed control policy by solving a distributed algebraic Riccati equation for the multiplayer noncooperative game; this policy is computed against the worst-case policies of all the other players. We guarantee the existence of distributed minmax solutions and study their $\mathcal{L}_{2}$ and asymptotic stability properties. Under mild conditions, the resulting minmax control policies are shown to improve the gain and phase robustness margins of multiplayer systems compared with the standard linear-quadratic regulator controller. Distributed minmax solutions are computed using both a model-based policy iteration algorithm and a data-driven off-policy RL algorithm. Simulation examples verify the proposed formulation and demonstrate its computational efficiency over nondistributed Nash solutions.
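To make the model-based route concrete, the sketch below illustrates policy iteration for a generic two-player linear-quadratic minmax problem (one minimizing control input against a worst-case opposing input). It is not the authors' distributed multiplayer algorithm; the matrices A, B, D, Q, R, the attenuation level gamma, and the assumption of an initially stabilizing gain pair are all illustrative choices, and the iteration simply alternates a Lyapunov-equation policy evaluation with gain updates for both players.

```python
# Minimal sketch (assumed setup, not the paper's exact method) of model-based
# policy iteration for a linear-quadratic minmax problem:
#   dx/dt = A x + B u + D w,
#   cost  = integral( x'Qx + u'Ru - gamma^2 w'w ) dt,
# where u minimizes and w plays the worst case.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def minmax_policy_iteration(A, B, D, Q, R, gamma, n_iter=100, tol=1e-9):
    n = A.shape[0]
    K = np.zeros((B.shape[1], n))   # control gain; assumed A - B K + D L stable
    L = np.zeros((D.shape[1], n))   # worst-case (maximizing) gain
    P_prev = np.zeros((n, n))
    for _ in range(n_iter):
        # Policy evaluation: solve Ac' P + P Ac + Q + K'RK - gamma^2 L'L = 0.
        Ac = A - B @ K + D @ L
        Qc = Q + K.T @ R @ K - gamma**2 * (L.T @ L)
        P = solve_continuous_lyapunov(Ac.T, -Qc)
        # Policy improvement for both players.
        K = np.linalg.solve(R, B.T @ P)
        L = (1.0 / gamma**2) * (D.T @ P)
        if np.linalg.norm(P - P_prev) < tol:
            break
        P_prev = P
    return P, K, L
```

The data-driven off-policy RL variant mentioned in the abstract would replace the model-based Lyapunov solve with quantities estimated from measured trajectories; it is not sketched here.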
Keywords
Games, Nash equilibrium, Robustness, Optimal control, Computational modeling, Asymptotic stability, Costs, Distributed solution, minmax control, multiplayer games, reinforcement learning (RL), robustness margins