
SMACv2: An Improved Benchmark for Cooperative Multi-Agent Reinforcement Learning

Advances in Neural Information Processing Systems 36 (NeurIPS 2023)

Abstract
The availability of challenging benchmarks has played a key role in the recent progress of machine learning. In cooperative multi-agent reinforcement learning, the StarCraft Multi-Agent Challenge (SMAC) has become a popular testbed for the centralised training with decentralised execution paradigm. However, after years of sustained improvement on SMAC, algorithms now achieve near-perfect performance. In this work, we conduct a new analysis demonstrating that SMAC lacks the stochasticity and partial observability to require complex closed-loop policies (i.e., those that condition on the observation). In particular, we show that an open-loop policy conditioned only on the timestep can achieve non-trivial win rates for many SMAC scenarios. To address this limitation, we introduce SMACv2, a new benchmark where scenarios are procedurally generated and require agents to generalise to previously unseen settings during evaluation. We show that these changes ensure the benchmark requires the use of closed-loop policies. We also introduce the extended partial observability challenge (EPO), which augments SMACv2 to ensure meaningful partial observability. We evaluate state-of-the-art algorithms on SMACv2 and show that it presents significant challenges not present in the original benchmark. Our analysis illustrates that SMACv2 addresses the discovered deficiencies of SMAC and can help benchmark the next generation of MARL methods. Videos of training are available on our website.
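To make the open-loop versus closed-loop distinction concrete, here is a minimal Python sketch; the names are hypothetical illustrations, not taken from the paper's codebase. An open-loop policy is simply a fixed action schedule indexed by the timestep, so it never consults the observation, whereas a closed-loop policy chooses actions based on what the agent actually observes.

```python
from typing import Any, Callable, Sequence


def make_open_loop_policy(action_schedule: Sequence[int]) -> Callable[[int, Any], int]:
    """An open-loop policy: a fixed action schedule indexed by the timestep."""
    def policy(timestep: int, observation: Any) -> int:
        # The observation argument is deliberately ignored.
        return action_schedule[min(timestep, len(action_schedule) - 1)]
    return policy


def make_closed_loop_policy(obs_to_action: Callable[[Any], int]) -> Callable[[int, Any], int]:
    """A closed-loop policy: the action depends on the current observation."""
    def policy(timestep: int, observation: Any) -> int:
        return obs_to_action(observation)
    return policy


# Usage: the open-loop policy returns the same actions on every episode,
# regardless of what happens in the environment.
policy = make_open_loop_policy([0, 0, 1, 2])
action = policy(5, observation=None)  # -> 2; the observation is never consulted
```

The paper's finding is that policies of the first kind already achieve non-trivial win rates on many original SMAC scenarios, which is what motivates the procedural generation in SMACv2.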