Curriculum-based Reinforcement Learning for Distribution System Critical Load Restoration

arXiv (2023)

Abstract
This paper focuses on the critical load restoration problem in distribution systems following major outages. To provide fast online response and optimal sequential decision-making support, a reinforcement learning (RL) based approach is proposed to optimize the restoration. Due to the complexities stemming from the large policy search space, renewable uncertainty, and nonlinearity in a complex grid control problem, directly applying RL algorithms to train a satisfactory policy requires extensive tuning. To address this challenge, this paper leverages the curriculum learning (CL) technique to design a training curriculum involving a simpler stepping-stone problem that guides the RL agent to learn to solve the original hard problem progressively and more efficiently. We demonstrate that, compared with direct learning, CL facilitates controller training and achieves better performance. In the experiments, to study realistic scenarios where the renewable forecasts used for decision-making are in general imperfect, the trained RL controllers are compared with two model predictive controllers (MPCs) using renewable forecasts with different error levels, and we observe how these controllers hedge against the uncertainty. Results show that the RL controllers are less susceptible to forecast errors than the baseline MPCs and can provide a more reliable restoration process.
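
To make the curriculum idea concrete, the following is a minimal, self-contained sketch in Python. It is not the authors' implementation: the toy environment, its dynamics, and all names (ToyRestorationEnv, q_learning, noise, etc.) are illustrative assumptions. It shows only the structure of the technique named in the abstract: a simpler stepping-stone stage (here, zero renewable uncertainty) is solved first, and its learned values warm-start training on the harder target stage.

import random
from collections import defaultdict

class ToyRestorationEnv:
    """Toy critical-load restoration task (illustrative only).

    State: (time step, number of load blocks energized).
    Action: 0 = hold, 1 = pick up one more load block.
    Available generation ramps up over the horizon; `noise` perturbs it
    to mimic renewable forecast error. Over-committing load is penalized,
    standing in for load shedding during restoration.
    """
    HORIZON, BLOCKS = 10, 5

    def __init__(self, noise: float):
        self.noise = noise

    def reset(self):
        self.t, self.restored = 0, 0
        return (self.t, self.restored)

    def step(self, action: int):
        self.restored = min(self.BLOCKS, self.restored + action)
        capacity = 0.6 * (self.t + 1) + random.uniform(-self.noise, self.noise)
        served = min(self.restored, capacity)
        reward = served - 3.0 * max(0.0, self.restored - capacity)
        self.t += 1
        return (self.t, self.restored), reward, self.t >= self.HORIZON

def q_learning(env, q=None, episodes=2000, eps=0.1, alpha=0.1, gamma=0.95):
    """Tabular Q-learning; passing `q` lets a later curriculum stage
    start from the value function learned on an earlier, easier stage."""
    q = q if q is not None else defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection over the two actions.
            a = random.choice((0, 1)) if random.random() < eps else \
                max((0, 1), key=lambda a: q[(s, a)])
            s2, r, done = env.step(a)
            best = max(q[(s2, 0)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + gamma * best - q[(s, a)])
            s = s2
    return q

# Curriculum: solve the deterministic stepping-stone task first, then reuse
# the learned Q-table when training on the uncertain target task.
q = q_learning(ToyRestorationEnv(noise=0.0))        # easy stage
q = q_learning(ToyRestorationEnv(noise=0.5), q=q)   # hard stage, warm-started

The paper's actual controller operates on a far richer grid model and policy class; the point of the sketch is only the two-stage training schedule, in which direct learning would correspond to running the hard stage alone from scratch.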
Keywords
distribution system critical