Augmented Mixed Vehicular Platoon Control With Dense Communication Reinforcement Learning for Traffic Oscillation Alleviation

Cited by: 1
Authors
Li, Meng [1 ,2 ]
Cao, Zehong [3 ]
Li, Zhibin [1 ]
Affiliations
[1] Southeast Univ, Sch Transportat, Nanjing 210096, Peoples R China
[2] Nanyang Technol Univ, Sch Mech & Aerosp Engn, Singapore 639798, Singapore
[3] Univ South Australia, STEM, Adelaide, SA 5000, Australia
Source
IEEE INTERNET OF THINGS JOURNAL | 2024, Vol. 11, No. 22
Funding
National Natural Science Foundation of China
Keywords
Oscillators; Training; Fluctuations; Safety; Internet of Things; Optimization; Vehicle dynamics; Augmented Intelligence of Things (AIoT); connected autonomous vehicle (CAV); mixed vehicular platoon; reinforcement learning; traffic oscillation; CAR; VEHICLES; MODEL;
DOI
10.1109/JIOT.2024.3409618
Chinese Library Classification (CLC)
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Traffic oscillations present significant challenges to road transportation systems, resulting in reduced fuel efficiency, heightened crash risks, and severe congestion. The recently emerging Augmented Intelligence of Things (AIoT) technology holds promise for enhancing traffic flow through vehicle-road cooperation. A representative application involves using deep reinforcement learning (DRL) techniques to control connected autonomous vehicle (CAV) platoons to alleviate traffic oscillations. However, uncertainties in the driving behavior of human-driven vehicles (HDVs) and the random distribution of CAVs make it challenging to achieve effective traffic oscillation alleviation in the Internet of Things environment. Existing DRL-based mixed vehicular platoon control strategies underutilize downstream traffic data, impairing CAVs' ability to predict and mitigate traffic oscillations and leading to inefficient speed adjustments and discomfort. This article proposes a dense communication cooperative RL policy for mixed vehicular platoons to address these challenges. It employs a parameter-sharing structure and a dense information flow topology, enabling CAVs to proactively respond to traffic oscillations while accommodating arbitrary vehicle distributions and communication failures. Experimental results demonstrate the superior performance of the proposed strategy in driving efficiency, comfort, and safety, particularly in scenarios involving multivehicle cut-ins or cut-outs and communication failures.
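The abstract names two design ideas: a parameter-sharing policy reused by every CAV, and a dense downstream information flow that degrades gracefully under communication failures. The short Python sketch below illustrates only that general idea; the observation window size, per-vehicle features, linear policy, and the names build_observation and shared_policy are illustrative assumptions and are not taken from the paper.

# Hypothetical sketch: one shared-parameter policy applied to every CAV in a
# mixed platoon. Each CAV observes a dense, fixed-length window of downstream
# vehicles (gap error and speed deviation); entries whose V2V link has failed
# are zero-masked, so the same policy can be evaluated under arbitrary vehicle
# distributions and communication failures. This is NOT the paper's actual
# network or training algorithm.
import numpy as np

WINDOW = 5          # number of downstream vehicles each CAV observes (assumed)
FEATURES = 2        # per-vehicle features: gap error, speed deviation (assumed)
OBS_DIM = WINDOW * FEATURES

rng = np.random.default_rng(0)
shared_weights = rng.normal(scale=0.1, size=OBS_DIM)   # one parameter vector shared by all CAVs

def build_observation(gap_errors, speed_devs, link_ok):
    """Stack downstream gap errors and speed deviations into one dense vector,
    zeroing out any vehicle whose communication link is down."""
    obs = np.zeros(OBS_DIM)
    for i in range(min(WINDOW, len(gap_errors))):
        if link_ok[i]:
            obs[FEATURES * i] = gap_errors[i]
            obs[FEATURES * i + 1] = speed_devs[i]
    return obs

def shared_policy(obs, max_accel=2.0):
    """Map an observation to a bounded acceleration command using the shared weights."""
    return float(np.clip(shared_weights @ obs, -max_accel, max_accel))

# Two CAVs at different platoon positions reuse exactly the same parameters;
# the second CAV also sees fewer downstream vehicles and one failed link.
obs_cav1 = build_observation([1.5, -0.8, 0.2], [0.4, -1.1, 0.0], [True, True, False])
obs_cav2 = build_observation([-0.3, 0.6], [0.9, -0.2], [True, True])
print(shared_policy(obs_cav1), shared_policy(obs_cav2))

In a trained system the shared weights would be produced by the RL algorithm rather than drawn at random; the point of the sketch is that position in the platoon, missing neighbors, and dropped links all reduce to the same fixed-size observation consumed by one policy.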
Pages: 35989-36001
Number of pages: 13
Related Papers
50 records in total
  • [31] A Reinforcement Learning-Based Vehicle Platoon Control Strategy for Reducing Energy Consumption in Traffic Oscillations
    Li, Meng
    Cao, Zehong
    Li, Zhibin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (12) : 5309 - 5322
  • [32] Preventing congestion control oscillation in cellular vehicular communication
    Park, Yongtae
    Kim, Hyogon
    ELECTRONICS LETTERS, 2021, 57 (24) : 927 - 929
  • [33] Reinforcement Learning with Iterative Reasoning for Merging in Dense Traffic
    Bouton, Maxime
    Nakhaei, Alireza
    Isele, David
    Fujimura, Kikuo
    Kochenderfer, Mykel J.
    2020 IEEE 23RD INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), 2020,
  • [34] Reinforcement Learning for Platooning Control in Vehicular Networks
    Gomides, Thiago S.
    Kranakis, Evangelos
    Lambadaris, Ioannis
    Viniotis, Yannis
    IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM, 2023, : 6856 - 6861
  • [35] Multi-Agent Deep Reinforcement Learning for Urban Traffic Light Control in Vehicular Networks
    Wu, Tong
    Zhou, Pan
    Liu, Kai
    Yuan, Yali
    Wang, Xiumin
    Huang, Huawei
    Wu, Dapeng Oliver
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (08) : 8243 - 8256
  • [36] AVDDPG - Federated reinforcement learning applied to autonomous platoon control
    Boin, Christian
    Lei, Lei
    Yang, Simon X.
    INTELLIGENCE & ROBOTICS, 2022, 2 (02):
  • [37] Collaborative Control of Vehicle Platoon Based on Deep Reinforcement Learning
    Chen, Jianzhong
    Wu, Xiaobao
    Lv, Zekai
    Xu, Zhihe
    Wang, Wenjie
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (10) : 14399 - 14414
  • [38] Scalable Reinforcement Learning Framework for Traffic Signal Control Under Communication Delays
    Pang, Aoyu
    Wang, Maonan
    Chen, Yirong
    Pun, Man-On
    Lepech, Michael
    IEEE OPEN JOURNAL OF VEHICULAR TECHNOLOGY, 2024, 5 : 330 - 343
  • [39] Multiagent Reinforcement Learning for Ecological Car-Following Control in Mixed Traffic
    Wang, Qun
    Ju, Fei
    Wang, Huaiyu
    Qian, Yahui
    Zhu, Meixin
    Zhuang, Weichao
    Wang, Liangmo
    IEEE TRANSACTIONS ON TRANSPORTATION ELECTRIFICATION, 2024, 10 (04): : 8671 - 8684
  • [40] Traffic Signal Control by Distributed Reinforcement Learning with Min-sum Communication
    Chu, Tianshu
    Wang, Jie
    2017 AMERICAN CONTROL CONFERENCE (ACC), 2017, : 5095 - 5100