Task-Prompt Generalised World Model in Multi-Environment Offline Reinforcement Learning

ECAI 2023 (2023)

Abstract
Offline reinforcement learning (RL) circumvents costly interactions with the environment by utilising historical trajectories. Incorporating a world model into this setting could substantially enhance transfer performance across various tasks without expensive retraining from scratch. However, due to the complexity arising from different types of generalisation, previous works have focused almost exclusively on single-environment tasks. In this study, we introduce a multi-environment offline RL setting to investigate whether a generalised world model can be learned from large, diverse datasets and serve as a good surrogate for policy learning across different tasks. Inspired by the success of multi-task prompt methods, we propose the Task-prompt Generalised World Model (TGW) framework, which demonstrates notable performance in this setting. TGW comprises three modules: a task-state prompter, a generalised dynamics module, and a reward module. We implement the generalised dynamics module as a transformer-based recurrent state-space model (TransRSSM) and employ prompts to provide task-specific instructions, enabling TGW to address the internal stochasticity of the generalised world model. On the MuJoCo control benchmarks, TGW significantly outperforms previous offline RL algorithms in the multi-environment setting.
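The abstract's three-module decomposition can be illustrated with a minimal structural sketch. This is a hypothetical toy, not the paper's implementation: the class names mirror the abstract's terminology, but the simple placeholder transitions stand in for the transformer-based recurrent state-space model (TransRSSM) and learned reward head, and all dimensions and functions here are assumptions.

```python
import random

class TaskStatePrompter:
    """Produces a task-specific prompt to condition the shared world model."""
    def __init__(self, num_tasks, prompt_dim):
        # Placeholder for learned per-task prompt embeddings.
        self.prompts = {t: [random.gauss(0.0, 1.0) for _ in range(prompt_dim)]
                        for t in range(num_tasks)}

    def __call__(self, task_id, state):
        # Concatenate the task prompt with the current state.
        return self.prompts[task_id] + state

class GeneralisedDynamics:
    """Stand-in for TransRSSM: maps a prompted state to the next latent."""
    def __call__(self, prompted_state):
        # Placeholder transition; the paper uses a transformer-based RSSM.
        return [0.9 * x for x in prompted_state]

class RewardModule:
    """Predicts a scalar reward from the latent state."""
    def __call__(self, latent):
        return sum(latent) / len(latent)

class TGW:
    """Composes the three modules named in the abstract."""
    def __init__(self, num_tasks, prompt_dim):
        self.prompter = TaskStatePrompter(num_tasks, prompt_dim)
        self.dynamics = GeneralisedDynamics()
        self.reward = RewardModule()

    def step(self, task_id, state):
        prompted = self.prompter(task_id, state)
        next_latent = self.dynamics(prompted)
        return next_latent, self.reward(next_latent)

# Toy rollout: one imagined step for task 0 with a 2-dimensional state.
model = TGW(num_tasks=3, prompt_dim=4)
latent, r = model.step(task_id=0, state=[0.1, 0.2])
```

The point of the sketch is only the data flow: task identity enters once through the prompter, so a single shared dynamics and reward model can serve every environment.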
Keywords
offline reinforcement learning, world model, task-prompt, multi-environment