Optimization of the Model Predictive Control Update Interval Using Reinforcement Learning

IFAC-PapersOnLine (2021)

Abstract
In control applications there is often a compromise to be made between the complexity and performance of the controller and the computational resources that are available. For instance, the typical hardware platform in embedded control applications is a microcontroller with limited memory and processing power, and in battery-powered applications the control system can account for a significant portion of the energy consumption. We propose a controller architecture in which the computational cost is explicitly optimized along with the control objective. This is achieved by a three-part architecture in which a high-level, computationally expensive controller generates plans, a computationally simpler controller executes the plans while compensating for prediction errors, and a recomputation policy decides when the plan should be recomputed. In this paper, we employ model predictive control (MPC) as the high-level plan-generating controller, a linear state feedback controller as the simpler compensating controller, and reinforcement learning (RL) to learn the recomputation policy. Simulation results for the classic control task of balancing an inverted pendulum show that not only is the total processor time reduced by 60%; the RL policy also uncovers a non-trivial synergistic relationship between the MPC and the state feedback controller, improving the control performance by 20% over the MPC alone.
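To make the three-part architecture concrete, below is a minimal Python sketch of one step of the control loop, under the assumption that an MPC plan is stored as a reference state trajectory plus a feedforward input trajectory. All names here (solve_mpc, recompute_policy, K) are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def control_step(x, plan, K, recompute_policy, solve_mpc):
    """One step of the dual-mode controller (illustrative sketch).

    x                -- current state vector
    plan             -- stored MPC plan: (x_ref trajectory, u_ff trajectory, step index)
    K                -- linear state-feedback gain compensating prediction errors
    recompute_policy -- learned (RL) policy deciding whether to re-run the MPC
    solve_mpc        -- expensive MPC solver returning a fresh plan from state x
    """
    if recompute_policy(x, plan):
        # RL policy decides the stored plan is stale enough to justify
        # the computational cost of a full-horizon MPC solve.
        plan = solve_mpc(x)
    x_ref, u_ff, k = plan
    # Otherwise follow the stored plan: feedforward input plus linear
    # feedback on the deviation from the predicted trajectory.
    u = u_ff[k] + K @ (x_ref[k] - x)
    plan = (x_ref, u_ff, min(k + 1, len(u_ff) - 1))
    return u, plan
```

In this loop, the learned policy effectively trades each expensive solve_mpc call against the tracking error that the cheap feedback gain K can still absorb, which is where the reported processor-time savings come from.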
Keywords
model predictive control,reinforcement learning,event-driven control