Risk-Anticipatory Autonomous Driving Strategies Considering Vehicles' Weights Based on Hierarchical Deep Reinforcement Learning

Cited by: 2
Authors
Chen, Di [1 ,2 ]
Li, Hao [3 ]
Jin, Zhicheng [1 ,2 ]
Tu, Huizhao [3 ]
Zhu, Meixin [4 ,5 ,6 ]
Affiliations
[1] Tongji Univ, Coll Transportat Engn, Shanghai 201804, Peoples R China
[2] Hong Kong Polytech Univ, Dept Elect & Elect Engn, Hong Kong, Peoples R China
[3] Tongji Univ, Coll Transportat Engn, Key Lab Rd & Traff Engn, Minist Educ, Shanghai 201804, Peoples R China
[4] Hong Kong Univ Sci & Technol Guangzhou, Syst Hub, Guangzhou, Peoples R China
[5] Hong Kong Univ Sci & Technol, Civil & Environm Engn Dept, Hong Kong, Peoples R China
[6] Guangdong Prov Key Lab Integrated Commun Sensing, Guangzhou, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Autonomous vehicles; decision making; driving risk; driving safety; reinforcement learning; DECISION-MAKING; MITIGATION; CRASHES; TIME; ROAD;
DOI
10.1109/TITS.2024.3458439
Chinese Library Classification
TU [Architecture Science];
Discipline code
0813;
Abstract
Autonomous vehicles (AVs) have the potential to prevent accidents caused by driver error and to reduce road traffic risks. Because collisions involving heavy vehicles cause more serious crashes, vehicle weights need to be considered when designing driving strategies that aim to reduce both the potential risks and their consequences in the context of autonomous driving. This study develops an autonomous driving strategy based on risk anticipation, considering the weights of surrounding vehicles and using hierarchical deep reinforcement learning. A risk indicator integrating surrounding vehicles' weights, based on risk field theory, is proposed and incorporated into autonomous driving decisions. A hybrid action space is designed to allow left lane changes, right lane changes, and car-following, which enables AVs to act more freely and realistically whenever possible. To solve this hybrid decision-making problem, a hierarchical proximal policy optimization (HPPO) algorithm with an attention mechanism (AT-HPPO) is developed, providing great advantages in maintaining stable performance with high robustness and generalization. A new indicator, potential collision energy in conflicts (PCEC), is proposed to evaluate the developed AV driving strategy from the perspective of the consequences of potential accidents. Evaluation results in both simulation and on a real-world dataset demonstrate that our model provides driving strategies that reduce both the likelihood and the consequences of potential accidents while maintaining driving efficiency. The developed method is especially meaningful for AVs driving on highways, where heavy vehicles make up a high proportion of the traffic.
Pages: 19605-19618
Page count: 14
Related Papers
50 records
  • [21] Zero-shot Deep Reinforcement Learning Driving Policy Transfer for Autonomous Vehicles based on Robust Control
    Xu, Zhuo
    Tang, Chen
    Tomizuka, Masayoshi
    2018 21ST INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2018, : 2865 - 2871
  • [22] A Hierarchical Framework for Multi-Lane Autonomous Driving Based on Reinforcement Learning
    Zhang, Xiaohui
    Sun, Jie
    Wang, Yunpeng
    Sun, Jian
    IEEE OPEN JOURNAL OF INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 4 : 626 - 638
  • [23] Cooperative Autonomous Driving Control among Vehicles of Different Sizes Using Deep Reinforcement Learning
    Takenaka, Akito
    Harada, Tomohiro
    Miura, Yukiya
    Hattori, Kiyohiko
    Matuoka, Johei
    2024 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN 2024, 2024,
  • [24] Deep Reinforcement Learning with Intervention Module for Autonomous Driving
    Chi, Huicong
    Wang, Ping
    Wang, Chao
    Wang, Xinhong
    2022 IEEE 96TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2022-FALL), 2022,
  • [25] Dynamic Input for Deep Reinforcement Learning in Autonomous Driving
    Huegle, Maria
    Kalweit, Gabriel
    Mirchevska, Branka
    Werling, Moritz
    Boedecker, Joschka
    2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 7566 - 7573
  • [26] Deep Reinforcement Learning with Noisy Exploration for Autonomous Driving
    Li, Ruyang
    Zhang, Yaqiang
    Zhao, Yaqian
    Wei, Hui
    Xu, Zhe
    Zhao, Kun
    PROCEEDINGS OF 2022 THE 6TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND SOFT COMPUTING, ICMLSC 2022, 2022, : 8 - 14
  • [27] Autonomous Highway Driving using Deep Reinforcement Learning
    Nageshrao, Subramanya
    Tseng, H. Eric
    Filev, Dimitar
    2019 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC), 2019, : 2326 - 2331
  • [28] Distributed Deep Reinforcement Learning on the Cloud for Autonomous Driving
    Spryn, Mitchell
    Sharma, Aditya
    Parkar, Dhawal
    Shrimal, Madhur
    PROCEEDINGS 2018 IEEE/ACM 1ST INTERNATIONAL WORKSHOP ON SOFTWARE ENGINEERING FOR AI IN AUTONOMOUS SYSTEMS (SEFAIAS), 2018, : 16 - 22
  • [29] Evaluation of Deep Reinforcement Learning Algorithms for Autonomous Driving
    Stang, Marco
    Grimm, Daniel
    Gaiser, Moritz
    Sax, Eric
    2020 IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV), 2020, : 1576 - 1582
  • [30] A Deep Reinforcement Learning Approach for Autonomous Highway Driving
    Zhao, Junwu
    Qu, Ting
    Xu, Fang
    IFAC PAPERSONLINE, 2020, 53 (05): : 542 - 546