Bootstrapping Reinforcement Learning with Imitation for Vision-Based Agile Flight
CoRR (2024)
Abstract
We combine the effectiveness of Reinforcement Learning (RL) and the
efficiency of Imitation Learning (IL) in the context of vision-based,
autonomous drone racing. We focus on directly processing visual input without
explicit state estimation. While RL offers a general framework for learning
complex controllers through trial and error, it faces challenges regarding
sample efficiency and computational demands due to the high dimensionality of
visual inputs. Conversely, IL demonstrates efficiency in learning from visual
demonstrations but is limited by the quality of those demonstrations and faces
issues like covariate shift. To overcome these limitations, we propose a novel
training framework that combines the advantages of RL and IL. Our framework involves
three stages: initial training of a teacher policy using privileged state
information, distilling this policy into a student policy using IL, and
performance-constrained adaptive RL fine-tuning. Our experiments in both
simulated and real-world environments demonstrate that our approach achieves
superior performance and robustness compared to IL or RL alone in navigating a
quadrotor through a racing course using only visual information without
explicit state estimation.
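The three-stage pipeline described above can be sketched in miniature. The toy environment, the linear student, the behavior-cloning update, and the acceptance rule below are all illustrative assumptions, not the paper's actual method or code; they only mirror the structure of the stages: a teacher with privileged state, imitation-learning distillation into a policy that sees only (noisy) observations, and RL-style fine-tuning that is only accepted while performance stays above a floor.

```python
# Conceptual sketch (NOT the paper's implementation) of the three stages:
# 1) teacher with privileged state, 2) IL distillation into a vision-only
# student, 3) performance-constrained fine-tuning. All names are hypothetical.
import random

random.seed(0)

def privileged_teacher(state):
    # Stage 1 stand-in: a controller with access to the true state.
    return 1.0 if state > 0 else -1.0

def render(state):
    # Toy "camera": the student only sees a noisy observation of the state.
    return state + random.gauss(0, 0.1)

# Stage 2: behavior cloning -- regress the student's output toward the
# teacher's action on logged (observation, action) pairs.
dataset = [(render(s), privileged_teacher(s))
           for s in (random.uniform(-1, 1) for _ in range(200))]

w, lr = 0.0, 0.1  # a one-parameter linear "policy" for illustration
for obs, action in dataset:
    w += lr * (action - w * obs) * obs  # LMS step toward the teacher

def student(obs, w):
    return 1.0 if w * obs > 0 else -1.0

# Stage 3: fine-tuning under a performance constraint -- explore perturbed
# policies, but accept one only if its return stays above a floor.
def evaluate(w, episodes=200):
    correct = 0
    for _ in range(episodes):
        s = random.uniform(-1, 1)
        if student(render(s), w) == privileged_teacher(s):
            correct += 1
    return correct / episodes

floor = evaluate(w) - 0.05  # allowed degradation during exploration
for _ in range(50):
    w_new = w + random.gauss(0, 0.05)
    if evaluate(w_new) >= floor:
        w = w_new
```

The acceptance test in stage 3 is the simplest possible stand-in for the paper's "performance-constrained adaptive" fine-tuning: exploration is free to perturb the distilled policy, but updates that would destroy the behavior inherited from imitation are rejected.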