The Power of Resets in Online Reinforcement Learning
arXiv (2024)
Abstract
Simulators are a pervasive tool in reinforcement learning, but most existing
algorithms cannot efficiently exploit simulator access – particularly in
high-dimensional domains that require general function approximation. We
explore the power of simulators through online reinforcement learning with
local simulator access (or, local planning), an RL protocol where the agent
is allowed to reset to previously observed states and follow their dynamics
during training. We use local simulator access to unlock new statistical
guarantees that were previously out of reach:
- We show that MDPs with low coverability (Xie et al., 2023) – a general
structural condition that subsumes Block MDPs and Low-Rank MDPs – can be
learned in a sample-efficient fashion with only Q^⋆-realizability
(realizability of the optimal state-action value function); existing online
RL algorithms require significantly stronger representation conditions.
- As a consequence, we show that the notorious Exogenous Block MDP problem
(Efroni et al., 2022) is tractable under local simulator access.
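To make the local simulator access protocol concrete, here is a minimal hypothetical sketch of an environment wrapper that records every state observed during training and allows the agent to reset to any of them. All names (`LocalSimulatorWrapper`, `ChainEnv`, `set_state`) are illustrative assumptions, not part of the paper's formal setup.

```python
class ChainEnv:
    """Toy deterministic chain MDP used only to illustrate the protocol."""

    def __init__(self, length=5):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        # Move along the chain; reward 1 at the rightmost state.
        self.pos = min(max(self.pos + action, 0), self.length - 1)
        reward = 1.0 if self.pos == self.length - 1 else 0.0
        done = self.pos == self.length - 1
        return self.pos, reward, done

    def set_state(self, state):
        # Hypothetical simulator hook: jump directly to a given state.
        self.pos = state


class LocalSimulatorWrapper:
    """Grants local simulator access: the agent may reset to any
    previously observed state and follow the dynamics from there."""

    def __init__(self, env):
        self.env = env
        self.observed = []  # states visited so far during training

    def reset(self):
        state = self.env.reset()
        self.observed.append(state)
        return state

    def step(self, action):
        state, reward, done = self.env.step(action)
        self.observed.append(state)
        return state, reward, done

    def reset_to_observed(self, index):
        # Local planning: revisit a previously observed state.
        state = self.observed[index]
        self.env.set_state(state)
        return state
```

Note that the wrapper only permits resets to *previously observed* states, matching the protocol in the abstract, rather than arbitrary resets to any state of the MDP.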
The results above are achieved through a computationally inefficient
algorithm. We complement them with a more computationally efficient algorithm,
RVFS (Recursive Value Function Search), which achieves provable sample
complexity guarantees under a strengthened statistical assumption known as
pushforward coverability. RVFS can be viewed as a principled, provable
counterpart to a successful empirical paradigm that combines recursive search
(e.g., MCTS) with value function approximation.
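The paradigm that RVFS is described as formalizing – recursive search combined with value function approximation – can be sketched generically as a depth-limited search that bootstraps with a learned value estimate at the leaves. The sketch below is a hedged illustration of that general paradigm, not the RVFS algorithm itself; `simulate` and `value_fn` are assumed interfaces.

```python
def recursive_search(state, value_fn, simulate, actions, depth, gamma=0.99):
    """Depth-limited recursive search backed by a value function.

    `simulate(state, action)` stands in for local simulator access and
    returns (next_state, reward, done). At the depth limit, the learned
    value function `value_fn` supplies the leaf estimate.
    """
    if depth == 0:
        return value_fn(state)  # bootstrap with the value approximation
    best = float("-inf")
    for a in actions:
        next_state, reward, done = simulate(state, a)
        if done:
            q = reward
        else:
            q = reward + gamma * recursive_search(
                next_state, value_fn, simulate, actions, depth - 1, gamma
            )
        best = max(best, q)
    return best


def chain_simulate(state, action, length=5):
    """Toy deterministic chain dynamics for demonstration."""
    next_state = min(max(state + action, 0), length - 1)
    reward = 1.0 if next_state == length - 1 else 0.0
    done = next_state == length - 1
    return next_state, reward, done
```

Empirical systems in this family (e.g., AlphaZero-style MCTS) replace the exhaustive recursion with sampled tree search, but the structure – search over simulated rollouts, truncated by a value estimate – is the same.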