Robust Driving Policy Learning with Guided Meta Reinforcement Learning

CoRR (2023)

Abstract
Although deep reinforcement learning (DRL) has shown promising results for autonomous navigation in interactive traffic scenarios, existing work typically adopts a fixed behavior policy to control social vehicles in the training environment. This may cause the learned driving policy to overfit the environment, making it difficult to interact well with vehicles exhibiting different, unseen behaviors. In this work, we introduce an efficient method to train diverse driving policies for social vehicles as a single meta-policy. By randomizing the interaction-based reward functions of social vehicles, we generate diverse objectives and efficiently train the meta-policy through guiding policies that achieve specific objectives. We further propose a training strategy that enhances the robustness of the ego vehicle's driving policy by training in an environment where social vehicles are controlled by the learned meta-policy. Our method successfully learns an ego driving policy that generalizes well to unseen situations with out-of-distribution (OOD) social-agent behaviors in a challenging uncontrolled T-intersection scenario.
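
The abstract gives no implementation details, but the core idea of randomizing interaction-based reward functions to generate diverse social-vehicle objectives can be illustrated with a minimal sketch. In the Python snippet below, every name and parameter (sample_reward_params, progress_weight, yield_weight, collision_penalty) is a hypothetical assumption for illustration, not the paper's actual formulation: a reward-parameter vector is sampled per episode, and a single meta-policy would condition on that vector (or a latent encoding of it) to realize the corresponding behavior, from yielding to aggressive.

```python
import numpy as np

# Hypothetical sketch of per-episode reward randomization for social
# vehicles. Parameter names and ranges are illustrative assumptions,
# not taken from the paper.

def sample_reward_params(rng):
    """Draw a reward-parameter vector defining one social-vehicle objective."""
    return {
        "progress_weight": rng.uniform(0.5, 2.0),    # incentive to keep moving
        "yield_weight": rng.uniform(0.0, 1.0),       # willingness to yield to the ego car
        "collision_penalty": rng.uniform(5.0, 20.0), # cost of a crash
    }

def social_reward(params, speed, gap_to_ego, collided):
    """Interaction-based reward: progress minus yielding pressure and collisions."""
    r = params["progress_weight"] * speed
    # Penalize closing in on the ego vehicle in proportion to yield_weight;
    # a small gap combined with a low yield_weight yields aggressive behavior.
    r -= params["yield_weight"] * max(0.0, 1.0 - gap_to_ego)
    if collided:
        r -= params["collision_penalty"]
    return r

rng = np.random.default_rng(0)
for episode in range(3):
    params = sample_reward_params(rng)  # one sampled objective per episode
    # A meta-policy would take `params` (or a latent encoding of it) as an
    # extra input, so one network covers the whole family of behaviors.
    r = social_reward(params, speed=1.2, gap_to_ego=0.4, collided=False)
    print(episode, params, round(r, 3))
```

Under this reading, the ego policy is then trained against social vehicles driven by the meta-policy with freshly sampled objectives each episode, which is what exposes it to the behavioral diversity that supports the reported OOD generalization.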
Keywords
Policy Learning, Social Agents, Reward Function, Deep Reinforcement Learning, Training Environment, Variety Of Objects, Challenging Scenarios, Behavior Policy, Collision, Social Policy, Aggressive Behavior, Recurrent Neural Network, Autonomous Vehicles, Baseline Methods, Policy Variables, Value Orientation, Diverse Behaviors, Network Inference, Variational Autoencoder, Autonomous Agents, Preferred Range, Policy Agencies, Multi-agent Reinforcement Learning, Performance Of Agents, Agent Observes, Ablation Method, Low-level Control, Learning Agent, Real-world Setting, Latent State