COOR-PLT: A hierarchical control model for coordinating adaptive platoons of connected and autonomous vehicles at signal-free intersections based on deep reinforcement learning

Cited by: 29
Authors
Li, Duowei [1 ,2 ]
Zhu, Feng [2 ]
Chen, Tianyi [2 ]
Wong, Yiik Diew [2 ]
Zhu, Chunli [3 ,4 ]
Wu, Jianping [1 ]
Affiliations
[1] Tsinghua Univ, Dept Civil Engn, Beijing, Peoples R China
[2] Nanyang Technol Univ, Sch Civil & Environm Engn, Singapore, Singapore
[3] Beijing Inst Technol, Sch Informat & Elect, Beijing, Peoples R China
[4] Beijing Inst Technol, Adv Res Inst Multidisciplinary Sci, Beijing, Peoples R China
Keywords
Connected and autonomous vehicle (CAV); Signal-free intersection; Adaptive platoon; Multi-agent coordination; Hierarchical control; Deep reinforcement learning; AUTOMATED VEHICLES; MANAGEMENT; THROUGHPUT; FRAMEWORK; SYSTEM;
DOI
10.1016/j.trc.2022.103933
Chinese Library Classification
U [Transportation];
Discipline codes
08; 0823;
Abstract
Platooning and coordination are two strategies frequently proposed for controlling connected and autonomous vehicles (CAVs) at signal-free intersections in place of conventional traffic signals. However, few studies have attempted to integrate both strategies to better facilitate CAV control at signal-free intersections. To this end, this study proposes a hierarchical control model, named COOR-PLT, to coordinate adaptive CAV platoons at a signal-free intersection based on deep reinforcement learning (DRL). COOR-PLT has a two-layer framework. The first layer uses a centralized control strategy to form adaptive platoons; the optimal size of each platoon is determined by considering multiple objectives (i.e., efficiency, fairness, and energy saving). The second layer employs a decentralized control strategy to coordinate multiple platoons passing through the intersection: each platoon is labeled with a coordinated or independent status, upon which its passing priority is determined. The Deep Q-network (DQN) algorithm, an efficient DRL method, is adopted to determine platoon sizes and passing priorities in the two layers, respectively. The model is validated and examined in the Simulation of Urban Mobility (SUMO) simulator. The simulation results demonstrate that the model is able to: (1) achieve satisfactory convergence performance; (2) adaptively determine platoon size in response to varying traffic conditions; and (3) completely avoid deadlocks at the intersection. In comparison with other control methods, the model demonstrates the superiority of its adaptive platooning and DRL-based coordination strategies, and it outperforms several state-of-the-art methods in reducing travel time and fuel consumption under different traffic conditions.
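The abstract's first layer — a Q-learning agent that maps traffic state to a discrete platoon-size action — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the state features, the candidate platoon sizes, and the linear Q-function (standing in for the paper's actual deep Q-network) are all assumptions made for the sketch.

```python
import numpy as np

# Assumed discrete action set: candidate platoon sizes for Layer 1.
PLATOON_SIZES = [1, 2, 3, 4, 5]

class LinearQ:
    """Tiny linear Q-function approximator standing in for a DQN."""

    def __init__(self, n_features, n_actions, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # One weight row per action; Q(s, a) = W[a] . s
        self.W = rng.normal(scale=0.1, size=(n_actions, n_features))
        self.lr = lr

    def q_values(self, state):
        return self.W @ state

    def select_action(self, state, eps=0.1, rng=None):
        """Epsilon-greedy action selection, as in standard DQN."""
        rng = rng or np.random.default_rng()
        if rng.random() < eps:
            return int(rng.integers(len(self.W)))      # explore
        return int(np.argmax(self.q_values(state)))    # exploit

    def update(self, state, action, reward, next_state, gamma=0.95):
        """One-step Q-learning update toward r + gamma * max_a' Q(s', a')."""
        target = reward + gamma * np.max(self.q_values(next_state))
        td_error = target - self.q_values(state)[action]
        self.W[action] += self.lr * td_error * state
        return td_error

# Hypothetical state: (queue length, mean speed, arrival rate), normalized.
agent = LinearQ(n_features=3, n_actions=len(PLATOON_SIZES))
s = np.array([0.6, 0.4, 0.7])
a = agent.select_action(s, eps=0.0)   # greedy pick of a platoon size
size = PLATOON_SIZES[a]
# A reward reflecting efficiency/fairness/energy would drive the update:
agent.update(s, a, reward=1.0, next_state=s)
```

The paper's second layer would use the same machinery with a different action set (coordinated vs. independent status per platoon); only the state encoding and reward change.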
Pages: 27
Related papers
20 records
  • [1] Optimal Control for Connected and Autonomous Vehicles at Signal-Free Intersections
    Chen, Boli
    Pan, Xiao
    Evangelou, Simos A.
    Timotheou, Stelios
    IFAC PAPERSONLINE, 2020, 53 (02): : 15306 - 15311
  • [2] Traffic Signal and Autonomous Vehicle Control Model: An Integrated Control Model for Connected Autonomous Vehicles at Traffic-Conflicting Intersections Based on Deep Reinforcement Learning
    Li, Yisha
    Zhang, Hui
    Zhang, Ya
    JOURNAL OF TRANSPORTATION ENGINEERING PART A-SYSTEMS, 2025, 151 (02)
  • [3] Decentralized Model Predictive Control for Automated and Connected Electric Vehicles at Signal-free Intersections
    Pan, Xiao
    Chen, Boli
    Dai, Li
    Timotheou, Stelios
    Evangelou, Simos A.
    2021 60TH IEEE CONFERENCE ON DECISION AND CONTROL (CDC), 2021, : 2659 - 2664
  • [4] Adaptive Traffic Signal Control Model on Intersections Based on Deep Reinforcement Learning
    Li, Duowei
    Wu, Jianping
    Xu, Ming
    Wang, Ziheng
    Hu, Kezhen
    JOURNAL OF ADVANCED TRANSPORTATION, 2020, 2020
  • [5] A Decision-Making Model for Autonomous Vehicles at Intersections Based on Hierarchical Reinforcement Learning
    Chen, Xue-Mei
    Xu, Shu-Yuan
    Wang, Zi-Jia
    Zheng, Xue-Long
    Han, Xin-Tong
    Liu, En-Hao
    UNMANNED SYSTEMS, 2024, 12 (04) : 641 - 652
  • [6] An integrated model for coordinating adaptive platoons and parking decision-making based on deep reinforcement learning
    Li, Jia
    Guo, Zijian
    Jiang, Ying
    Wang, Wenyuan
    Li, Xin
    COMPUTERS & INDUSTRIAL ENGINEERING, 2025, 203
  • [7] A Privacy-Preserving-Based Distributed Collaborative Scheme for Connected Autonomous Vehicles at Multi-Lane Signal-Free Intersections
    Zhao, Yuan
    Gong, Dekui
    Wen, Shixi
    Ding, Lei
    Guo, Ge
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (07) : 6824 - 6835
  • [8] Hierarchical Optimization Based on Deep Reinforcement Learning for Connected Fuel Cell Hybrid Vehicles through Signalized Intersections
    Dong, Hongquan
    Zhao, Lingying
    Zhou, Hao
    Li, Haolin
    PROCESSES, 2023, 11 (09)
  • [9] Modeling adaptive platoon and reservation-based intersection control for connected and autonomous vehicles employing deep reinforcement learning
    Li, Duowei
    Wu, Jianping
    Zhu, Feng
    Chen, Tianyi
    Wong, Yiik Diew
    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, 2023, 38 (10) : 1346 - 1364