Reinforcement Learning of the Prediction Horizon in Model Predictive Control

IFAC-PapersOnLine (2021)

Cited by 12
Abstract
Model predictive control (MPC) is a powerful trajectory optimization control technique capable of controlling complex nonlinear systems while respecting system constraints and ensuring safe operation. The MPC's capabilities come at the cost of a high online computational complexity, the requirement of an accurate model of the system dynamics, and the necessity of tuning its parameters to the specific control application. The main tunable parameter affecting the computational complexity is the prediction horizon length, which controls how far into the future the MPC predicts the system response and thus evaluates the optimality of its computed trajectory. A longer horizon generally increases the control performance, but requires an increasingly powerful computing platform, ruling out certain control applications. The performance sensitivity to the prediction horizon length varies over the state space, and this motivated adaptive horizon model predictive control (AHMPC), which adapts the prediction horizon according to some criteria. In this paper we propose to learn the optimal prediction horizon as a function of the state using reinforcement learning (RL). We show how the RL learning problem can be formulated and test our method on two control tasks, showing clear improvements over the fixed-horizon MPC scheme while requiring only minutes of learning.
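The idea in the abstract can be illustrated with a minimal sketch: an RL agent whose actions are candidate prediction horizons, with a reward that trades closed-loop performance against horizon length (as a proxy for compute). Everything below is an illustrative assumption, not the paper's method: the plant is a 1-D double integrator, the "MPC" is a crude random-shooting optimizer, and the horizon policy is learned with tabular Q-learning over a coarsely discretized state.

```python
import numpy as np

# Hypothetical adaptive-horizon MPC sketch (names/constants are illustrative).
DT = 0.1
HORIZONS = [5, 10, 20]     # candidate prediction horizons = RL actions
COMP_PENALTY = 0.01        # reward penalty per horizon step (models compute cost)

def step(x, u):
    """Double-integrator dynamics; state x = [position, velocity]."""
    pos, vel = x
    return np.array([pos + DT * vel, vel + DT * u])

def mpc_action(x, N, rng, n_samples=64):
    """Random-shooting 'MPC': sample input sequences of length N,
    return the first input of the cheapest rollout."""
    best_u0, best_cost = 0.0, np.inf
    for _ in range(n_samples):
        us = rng.uniform(-1.0, 1.0, N)
        xi, cost = x.copy(), 0.0
        for u in us:
            xi = step(xi, u)
            cost += xi[0] ** 2 + 0.1 * xi[1] ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_cost, best_u0 = cost, us[0]
    return best_u0

def state_bin(x):
    """Coarse discretization of the state for the tabular Q-function."""
    return (int(np.clip(x[0] // 0.5, -4, 3)) + 4,
            int(np.clip(x[1] // 0.5, -4, 3)) + 4)

def run_episode(Q, rng, eps=0.2, alpha=0.3, gamma=0.95, T=30):
    """One closed-loop episode; Q-learning over the horizon choice."""
    x = np.array([2.0, 0.0])
    total = 0.0
    for _ in range(T):
        s = state_bin(x)
        a = rng.integers(len(HORIZONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        u = mpc_action(x, HORIZONS[a], rng)
        x_next = step(x, u)
        # reward: control performance minus a penalty on horizon length
        r = -(x_next[0] ** 2 + 0.1 * x_next[1] ** 2) - COMP_PENALTY * HORIZONS[a]
        s_next = state_bin(x_next)
        Q[s][a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s][a])
        x, total = x_next, total + r
    return total

rng = np.random.default_rng(0)
Q = {(i, j): np.zeros(len(HORIZONS)) for i in range(8) for j in range(8)}
returns = [run_episode(Q, rng) for _ in range(20)]
```

The shape of the idea is the important part: the horizon is no longer a fixed tuning knob but a state-dependent decision, and the compute/performance trade-off enters explicitly through the reward. The paper formulates this RL problem properly; the tabular agent above is only a stand-in.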
Keywords
Adaptive horizon model predictive control, Reinforcement learning control