Sharing Control Knowledge Among Heterogeneous Intersections: A Distributed Arterial Traffic Signal Coordination Method Using Multi-Agent Reinforcement Learning

Times Cited: 0
Authors
Zhu, Hong [1 ]
Feng, Jialong [1 ]
Sun, Fengmei [1 ]
Tang, Keshuang [1 ]
Zang, Di [2 ,3 ]
Kang, Qi [4 ,5 ]
Affiliations
[1] Tongji Univ, Coll Transportat Engn, Key Lab Rd & Traff Engn, Minist Educ, Shanghai 201804, Peoples R China
[2] Tongji Univ, Dept Comp Sci & Technol, Shanghai 200092, Peoples R China
[3] Tongji Univ, Serv Comp, Key Lab Embedded Syst, Minist Educ, Shanghai 200092, Peoples R China
[4] Tongji Univ, Dept Control Sci & Engn, Shanghai 201804, Peoples R China
[5] Tongji Univ, Shanghai Inst Intelligent Sci & Technol, Shanghai 200092, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Optimization; Adaptation models; Process control; Reinforcement learning; Training; Stability criteria; Roads; Real-time systems; Electronic mail; Delays; Arterial traffic signal control; multi-agent reinforcement learning; proximal policy optimization; experience sharing; REAL-TIME; MODEL; SYSTEM;
DOI
10.1109/TITS.2024.3521514
Chinese Library Classification (CLC)
TU [Building Science];
Discipline Classification Code
0813;
Abstract
By treating each intersection as a basic agent, multi-agent reinforcement learning (MARL) methods have emerged as the predominant approach for distributed adaptive traffic signal control (ATSC) in multi-intersection scenarios such as arterial coordination. MARL-based ATSC currently faces two challenges: disturbances from the control policies of other intersections may impair an agent's learning and control stability, and heterogeneous features across intersections may complicate coordination. To address these challenges, this study proposes a novel MARL method for distributed ATSC along arterials, termed the Distributed Controller for Heterogeneous Intersections (DCHI). DCHI introduces a Neighborhood Experience Sharing (NES) framework, wherein each agent uses both local data and experiences shared by adjacent intersections to improve its control policy. Within this framework, each agent's neural networks are partitioned into two parts following the Knowledge Homogenizing Encapsulation (KHE) mechanism: the first part handles heterogeneous intersection features and transforms the control experiences, while the second part optimizes the homogeneous control logic. Experimental results demonstrate that DCHI improves average travel time by over 30% compared with traditional methods and performs similarly to the centralized sharing method. Furthermore, vehicle trajectories show that DCHI can adaptively establish green wave bands in a distributed manner. Given its superior control performance, accommodation of heterogeneous intersections, and low reliance on information networks, DCHI could significantly advance the practical application of MARL-based ATSC methods.
Pages: 2760-2776
Page Count: 17
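
The abstract above describes two mechanisms only at a high level: a Knowledge Homogenizing Encapsulation (KHE) split of each agent's networks into an intersection-specific part and a shared control-logic part, and Neighborhood Experience Sharing (NES) training on pooled local and neighbor experiences. The following Python (PyTorch) sketch is a minimal, hypothetical illustration of how such a split and pooling could look; the class names, dimensions, the detaching of shared latents, and the simplified PPO-style update are assumptions for exposition and are not taken from the paper.

# Hypothetical illustration only: names, dimensions, and the PPO-style update
# below are exposition-level assumptions, not the authors' DCHI implementation.
import torch
import torch.nn as nn

LATENT_DIM = 64   # assumed size of the shared ("homogenized") feature space
N_PHASES = 4      # assumed number of signal phases available at every intersection


class HeterogeneityEncoder(nn.Module):
    # Intersection-specific part: maps a raw observation whose size differs across
    # intersections (e.g., different lane counts) to a fixed-size latent vector.
    def __init__(self, obs_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM), nn.ReLU(),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class HomogeneousPolicy(nn.Module):
    # Structure-shared part: an actor-critic head that operates only on the latent
    # space, so experience encoded by one agent is interpretable by another.
    def __init__(self):
        super().__init__()
        self.actor = nn.Linear(LATENT_DIM, N_PHASES)
        self.critic = nn.Linear(LATENT_DIM, 1)

    def forward(self, latent: torch.Tensor):
        dist = torch.distributions.Categorical(logits=self.actor(latent))
        return dist, self.critic(latent).squeeze(-1)


def ppo_update(policy, optimizer, batch, clip_eps=0.2):
    # Clipped-surrogate PPO step on a batch of latent-space transitions pooled
    # from the agent itself and its neighbors (the experience-sharing idea).
    dist, value = policy(batch["latent"])
    logp = dist.log_prob(batch["action"])
    ratio = torch.exp(logp - batch["old_logp"])
    adv = batch["advantage"]
    policy_loss = -torch.min(ratio * adv,
                             torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv).mean()
    value_loss = (value - batch["return"]).pow(2).mean()
    loss = policy_loss + 0.5 * value_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Two neighboring intersections with different observation sizes.
    enc_a = HeterogeneityEncoder(obs_dim=12)
    enc_b = HeterogeneityEncoder(obs_dim=20)
    policy_a = HomogeneousPolicy()
    opt_a = torch.optim.Adam(list(enc_a.parameters()) + list(policy_a.parameters()), lr=3e-4)

    # Agent A pools its own encoded transitions with transitions encoded (and shared)
    # by neighbor B; the neighbor's latents are detached, i.e. treated as fixed data.
    local_latent = enc_a(torch.randn(32, 12))
    shared_latent = enc_b(torch.randn(32, 20)).detach()
    batch = {
        "latent": torch.cat([local_latent, shared_latent], dim=0),
        "action": torch.randint(0, N_PHASES, (64,)),
        "old_logp": torch.zeros(64),
        "advantage": torch.randn(64),
        "return": torch.randn(64),
    }
    print("loss:", ppo_update(policy_a, opt_a, batch))

In this sketch, neighbor experience is exchanged in the homogenized latent space and detached before training, so each agent's gradients update only its own encoder and policy head; whether DCHI shares raw observations, latents, or otherwise transformed transitions is a design detail the abstract does not specify.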