XLight: An interpretable multi-agent reinforcement learning approach for traffic signal control

Times Cited: 0
Authors
Cai, Sibin [1 ]
Fang, Jie [1 ]
Xu, Mengyun [1 ,2 ]
Affiliations
[1] Fuzhou Univ, Coll Civil Engn, Fuzhou 350108, Peoples R China
[2] Natl Univ Singapore, Dept Civil & Environm Engn, Singapore 119077, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Multi-agent reinforcement learning; Traffic signal control; Interpretability; Regulatable function; Maximum entropy policy optimization;
DOI
10.1016/j.eswa.2025.126938
CLC Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Recently, deep reinforcement learning (DRL)-based traffic signal control (TSC) methods have garnered significant attention among researchers and achieved substantial progress. However, current research often focuses on performance improvement while neglecting interpretability, and DRL-based TSC methods remain difficult to interpret. This limitation poses a significant obstacle to practical deployment, given the liability and regulatory constraints faced by the governmental authorities responsible for traffic management and control. In contrast, interpretable RL-based TSC methods offer greater flexibility to meet specific operational requirements; for instance, clearing vehicles in a particular movement can be prioritized simply by assigning higher weights to the state variables associated with that movement. To address this issue, we propose XLight, an interpretable multi-agent reinforcement learning (MARL) approach for TSC, which enhances interpretability in three key aspects: (a) meticulously designing and selecting the state space, action space, and reward function; in particular, we propose an interpretable reward function for network-wide TSC and prove that maximizing this reward is equivalent to minimizing the average travel time (ATT) in the road network; (b) introducing more practical regulatable (i.e., interpretable) functions as TSC controllers; and (c) employing maximum entropy policy optimization, which simultaneously enhances interpretability and improves transferability. To better align with practical applications of network-wide TSC, we further propose several interpretable MARL-based methods; among these, Multi-Agent Regulatable Soft Actor-Critic (MARSAC) is not only interpretable but also achieves superior performance. Finally, comprehensive experiments across various TSC scenarios, including an isolated intersection, a synthetic network of intersections, and real-world road networks, demonstrate the effectiveness of the proposed approach. For example, in terms of ATT, our method achieves improvements of 9.55%, 34.17%, 3.98%, and 42.93% over Actuated Traffic Signal Control (ATSC) on a synthetic road network and three real-world road networks, respectively. Furthermore, on the synthetic network, our method improves the Safety Score and Fuel Consumption metrics by 4.04% and 3.21%, respectively, compared to ATSC.
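
The abstract's example of steering a controller by re-weighting state variables can be sketched concretely. The snippet below is a minimal, hypothetical illustration of a "regulatable" (interpretable) phase selector that scores each candidate phase by a weighted sum of the queue lengths of the movements it serves; the phase names, movement names, and scoring rule are assumptions made for illustration and are not the paper's actual XLight/MARSAC formulation.

# Minimal, hypothetical sketch of a regulatable (interpretable) phase selector.
# State: per-movement queue lengths; action: one of four candidate phases.
# Because the movement weights are explicit, prioritising the clearance of a
# particular movement (the abstract's example) only requires raising its weight.
# Phase/movement names and the scoring rule are illustrative assumptions, not
# the paper's actual XLight/MARSAC controller.

PHASES = {
    "NS_through": ["N_through", "S_through"],
    "EW_through": ["E_through", "W_through"],
    "NS_left":    ["N_left",    "S_left"],
    "EW_left":    ["E_left",    "W_left"],
}

def select_phase(queues: dict, weights: dict) -> str:
    """Pick the phase whose served movements have the largest weighted queue."""
    scores = {
        phase: sum(weights.get(m, 1.0) * queues.get(m, 0.0) for m in movements)
        for phase, movements in PHASES.items()
    }
    return max(scores, key=scores.get)

# Example: give northbound through traffic priority by doubling its weight;
# without the weight, EW_through (5 + 4 = 9) would win over NS_through (6 + 2 = 8).
queues = {"N_through": 6, "S_through": 2, "E_through": 5, "W_through": 4,
          "N_left": 1, "S_left": 0, "E_left": 3, "W_left": 2}
print(select_phase(queues, weights={"N_through": 2.0}))  # -> "NS_through"
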
Pages: 20