Comparing Model-free and Model-based Algorithms for Offline Reinforcement Learning

IFAC-PapersOnLine (2022)

Abstract
Offline reinforcement learning (RL) algorithms are often designed with environments such as MuJoCo in mind, in which the planning horizon is extremely long and no noise exists. We compare model-free, model-based, and hybrid offline RL approaches on various Industrial Benchmark (IB) datasets to test the algorithms in settings closer to real-world problems, including complex noise and partially observable states. We find that on the IB, hybrid approaches face severe difficulties and that simpler algorithms, such as rollout-based algorithms or model-free algorithms with simpler regularizers, perform best on the datasets.
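
The abstract credits "model-free algorithms with simpler regularizers" among the best performers on the IB. As an illustration only, not taken from this paper, the sketch below shows a behavior-cloning-regularized actor update in the style of TD3+BC (Fujimoto & Gu, 2021), a widely cited example of such a simple regularizer. The network sizes, the alpha weight, and the placeholder batch are all assumptions.

```python
# Hypothetical sketch of a TD3+BC-style actor update: maximize Q while
# penalizing deviation from the logged actions in the offline dataset.
# Not the paper's implementation; hyperparameters are illustrative.
import torch
import torch.nn as nn

obs_dim, act_dim, alpha = 8, 3, 2.5  # alpha trades off Q vs. BC (assumed)

actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                      nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
opt = torch.optim.Adam(actor.parameters(), lr=3e-4)

# One batch from a fixed offline dataset (random placeholders here).
obs = torch.randn(256, obs_dim)
act = torch.rand(256, act_dim) * 2 - 1

pi = actor(obs)
q = critic(torch.cat([obs, pi], dim=-1))
# Normalize the Q term so the BC penalty has a dataset-independent scale,
# then balance return maximization against staying close to logged actions.
lam = alpha / q.abs().mean().detach()
actor_loss = -lam * q.mean() + ((pi - act) ** 2).mean()

opt.zero_grad()
actor_loss.backward()
opt.step()
```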
Keywords
Reinforcement Learning, Offline RL, Model-free, Model-based, Industrial AI