Model-data-driven control for human-leading vehicle platoon

Junru Yang, Duanfeng Chu, Liping Lu, Zhenghua Meng, Kun Deng

Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering (2024)

Abstract
This paper proposes a model-data-driven control method for a human-leading vehicle platoon, comprising a human-driven vehicle (HDV) as the leader and connected automated vehicles (CAVs) as followers. First, a representative trajectory of HDVs is constructed using principal component analysis and the K-means clustering algorithm, and this trajectory serves as the training dataset. We then propose a novel platooning method, named deep reinforcement learning with model-based guidance (DRLMG), in which the output of model predictive control (MPC) is integrated into the input state and reward function of the deep reinforcement learning (DRL) algorithm. The DRL algorithm thus benefits from MPC guidance, leading to better decision-making. To ensure safety and stability, a safety filter is designed using a control barrier function and a control Lyapunov function. Simulation experiments with real-world driving data show that DRLMG outperforms MPC, reducing speed error, spacing error, and acceleration change rate by 17.9%, 53.7%, and 47.1%, respectively. Compared with pure DRL, DRLMG increases spacing error by 6.5% but reduces speed error by 15.4% and acceleration change rate by 14.3%. The proposed method enhances DRL's generalization capability, dampens traffic oscillations caused by the leading HDV, and guarantees driving safety and stability.
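To make the two key ingredients of the abstract concrete, the sketch below shows (a) a one-dimensional control-barrier-function safety filter for longitudinal car following and (b) MPC-guided state augmentation and reward shaping. This is a minimal illustration of the general technique, not the paper's implementation: all parameters (`d_min`, `tau`, `alpha`, `w_guide`) and the time-headway barrier choice are assumptions for the example.

```python
# Hedged sketch of the two mechanisms named in the abstract, under
# assumed dynamics: spacing_dot = lead_v - ego_v, ego_v_dot = u.

def cbf_safety_filter(u_drl, spacing, ego_v, lead_v,
                      d_min=2.0, tau=1.5, alpha=0.5,
                      u_lim=(-5.0, 3.0)):
    """Project the DRL acceleration onto the set satisfying the
    barrier condition h_dot + alpha*h >= 0, with an assumed
    time-headway barrier h = spacing - d_min - tau*ego_v.

    The condition is linear in u, so the filter reduces to a
    closed-form clamp instead of a quadratic program."""
    h = spacing - d_min - tau * ego_v
    # (lead_v - ego_v) - tau*u + alpha*h >= 0  =>  u <= u_max_safe
    u_max_safe = ((lead_v - ego_v) + alpha * h) / tau
    u_safe = min(u_drl, u_max_safe)
    # Respect actuator limits.
    return max(u_lim[0], min(u_lim[1], u_safe))


def guided_state_and_reward(state, u_drl, u_mpc, base_reward,
                            w_guide=0.1):
    """One plausible reading of 'MPC output integrated into the input
    state and reward function': append the MPC suggestion to the
    observation and penalize deviation from it."""
    aug_state = list(state) + [u_mpc]
    reward = base_reward - w_guide * (u_drl - u_mpc) ** 2
    return aug_state, reward
```

In the safe regime the filter passes the DRL action through unchanged; as the barrier value shrinks, the admissible acceleration bound tightens and eventually forces braking, which is how a CBF filter guarantees forward invariance of the safe set regardless of what the learned policy proposes.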
Keywords
Human-leading vehicle platoon, model-data-driven, model predictive control (MPC), deep reinforcement learning (DRL), safety filter