MARL-LNS: Cooperative Multi-agent Reinforcement Learning via Large Neighborhoods Search
CoRR (2024)
Abstract
Cooperative multi-agent reinforcement learning (MARL) has become an
increasingly important research topic over the last half-decade because of
its great potential for real-world applications. Owing to the curse of
dimensionality, the popular "centralized training decentralized execution"
framework requires long training times, yet still may not converge
efficiently. In this paper, we propose a general training framework,
MARL-LNS, that algorithmically addresses these issues by training on
alternating subsets of agents, using existing deep MARL algorithms as
low-level trainers while introducing no additional trainable parameters.
Based on this framework, we provide three algorithm variants that alternate
the subsets of agents differently: random large neighborhood search (RLNS),
batch large neighborhood search (BLNS), and adaptive large neighborhood
search (ALNS). We test our algorithms on both the StarCraft Multi-Agent
Challenge and Google Research Football, showing that they can automatically
reduce training time by at least 10% while reaching the same final skill
level as the original algorithm.
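The core idea of the framework — train only an alternating "neighborhood" (subset) of agents per iteration while the rest stay frozen — can be sketched as follows. This is a minimal illustration, not the paper's code: the function names, signatures, and scheduling details are assumptions, and the adaptive variant (ALNS), which additionally adjusts the neighborhood size over time, is omitted here.

```python
import random

def rlns_schedule(num_agents, k, iters, seed=0):
    # RLNS (illustrative): sample a fresh random neighborhood of k agents
    # each training iteration; agents outside it keep frozen policies and
    # are treated as part of the environment by the low-level MARL trainer.
    rng = random.Random(seed)
    return [sorted(rng.sample(range(num_agents), k)) for _ in range(iters)]

def blns_schedule(num_agents, k, iters):
    # BLNS (illustrative): partition the agents into fixed batches of size
    # at most k once, then cycle through the batches across iterations.
    batches = [list(range(i, min(i + k, num_agents)))
               for i in range(0, num_agents, k)]
    return [batches[t % len(batches)] for t in range(iters)]
```

Either schedule would then drive an existing trainer (e.g., a MAPPO- or QMIX-style learner) by restricting each gradient update to the scheduled subset, which is why the framework adds no parameters of its own.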