On Convex Optimal Value Functions For POSGs
CoRR (2023)
Abstract
Multi-agent planning and reinforcement learning can be challenging when
agents cannot see the state of the world or communicate with each other due to
communication costs, latency, or noise. Partially Observable Stochastic Games
(POSGs) provide a mathematical framework for modelling such scenarios. This
paper aims to improve the efficiency of planning and reinforcement learning
algorithms for POSGs by identifying the underlying structure of optimal
state-value functions. The approach involves reformulating the original game
from the perspective of a trusted third party who plans on behalf of the agents
simultaneously. From this viewpoint, the original POSG can be viewed as a Markov
game whose states are occupancy states, i.e., posterior probability
distributions over both the hidden state of the world and the stream of actions
and observations that the agents have experienced so far. The main result proves that
the optimal state-value function is a convex function of occupancy states
expressed on an appropriate basis in all zero-sum, common-payoff, and
Stackelberg POSGs.
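To make the notion of an occupancy state concrete, the following is a minimal sketch (not from the paper) of one Bayesian update of such a posterior. The occupancy state is represented as a distribution over (hidden state, joint history) pairs; the transition model `T`, observation model `O`, and their dictionary layouts are illustrative assumptions, not the paper's notation.

```python
# Illustrative sketch: one-step update of an occupancy state, i.e. a
# posterior over (hidden state, action-observation history) pairs.
# The models T and O below are hypothetical stand-ins, not the paper's API.
from collections import defaultdict

def update_occupancy(occ, a, T, O, obs_space):
    """Bayesian update of an occupancy state after joint action `a`.

    occ:       dict mapping (state, history) -> probability
    a:         joint action taken by all agents
    T:         T[s][a] = dict {s2: P(s2 | s, a)}   (assumed transition model)
    O:         O[a][s2] = dict {z: P(z | a, s2)}   (assumed observation model)
    obs_space: iterable of possible joint observations
    Returns a dict mapping each joint observation z to the next occupancy state.
    """
    nxt = {z: defaultdict(float) for z in obs_space}
    for (s, h), p in occ.items():
        for s2, pt in T[s][a].items():
            for z in obs_space:
                # extend the history with the new (action, observation) pair
                nxt[z][(s2, h + ((a, z),))] += p * pt * O[a][s2].get(z, 0.0)
    # normalise each observation branch back into a probability distribution
    out = {}
    for z, d in nxt.items():
        total = sum(d.values())
        if total > 0:
            out[z] = {k: v / total for k, v in d.items()}
    return out

# Tiny two-state example: the state never changes, and the observation is
# informative about it (0.8 chance of seeing the matching signal).
T = {0: {'a': {0: 1.0}}, 1: {'a': {1: 1.0}}}
O = {'a': {0: {'z0': 0.8, 'z1': 0.2}, 1: {'z0': 0.2, 'z1': 0.8}}}
occ0 = {(0, ()): 0.5, (1, ()): 0.5}
occ1 = update_occupancy(occ0, 'a', T, O, ['z0', 'z1'])
```

In this example, after observing `z0` the posterior shifts toward state 0 (probability 0.8), matching the intuition that the occupancy state aggregates what the agents' joint histories reveal about the hidden state.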