Dynamic traffic signal control for heterogeneous traffic conditions using Max Pressure and Reinforcement Learning

Cited by: 2
Authors
Agarwal, Amit [1 ]
Sahu, Deorishabh [1 ]
Mohata, Rishabh [1 ]
Jeengar, Kuldeep [2 ]
Nautiyal, Anuj [1 ]
Saxena, Dhish Kumar [2 ]
Affiliations
[1] Indian Inst Technol Roorkee, Dept Civil Engn, Haridwar 247667, Uttaranchal, India
[2] Indian Inst Technol Roorkee, Dept Mech & Ind Engn, Roorkee 247667, Uttaranchal, India
Keywords
Adaptive Traffic Signal Control; Max Pressure; Mixed traffic; Reinforcement Learning; SATURATION FLOW; REAL-TIME; SYSTEM;
DOI
10.1016/j.eswa.2024.124416
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Optimization of green signal timing for each phase at a signalized intersection is critical for effective traffic management and congestion mitigation in urban areas. Many algorithms have been developed, yet very few target cities in developing nations, where traffic is characterized by its heterogeneity. While some recent studies have explored variants of Max Pressure (MP) and Reinforcement Learning (RL) for optimizing phase timing, their focus is limited to homogeneous traffic conditions. In developing nations such as India, control systems like fixed-time and actuated control are still predominantly used in practice; the Composite Signal Control Strategy (CoSiCoSt) is also employed at a few intersections. However, there is a notable absence of advanced models addressing heterogeneous traffic behavior, which have great potential to reduce delays and queue lengths. The present study proposes a hybrid algorithm for an adaptive traffic control system under real-world heterogeneous traffic conditions. The proposed algorithm integrates Max Pressure with Reinforcement Learning: the former dynamically determines the phase order by computing the pressure of each phase, while the latter optimizes the timing of each phase to minimize delays and queue lengths using the Proximal Policy Optimization algorithm. In contrast to past RL models, in which the timings of all phases are determined at once, the proposed algorithm determines the phase timing after the execution of every phase. To assess the impact, classified traffic volumes are extracted from surveillance videos of an intersection in Ludhiana, Punjab, and simulated using Simulation of Urban Mobility (SUMO). Traffic volume data are collected over three distinct time periods of the day. The results of the proposed algorithm are compared with benchmark algorithms: actuated control, CoSiCoSt, acyclic and cyclic Max Pressure, and Reinforcement Learning-based algorithms.
To assess performance, queue length, delay, and queue dissipation time are considered as key performance indicators. Of the actuated and CoSiCoSt controllers, the latter performs better, and thus the performance of the proposed hybrid algorithm is compared with CoSiCoSt. The proposed algorithm reduces total delay and queue dissipation time by 77.07%-87.66% and 53.95%-62.07%, respectively. Similarly, with respect to the best-performing RL model, the reductions in delay and queue dissipation time range from 55.63% to 77.12% and from 22.13% to 43.7%, respectively, which is significant at the 99% confidence level. The proposed algorithm is deployed on a wireless hardware architecture to confirm the feasibility of real-world implementation. The findings highlight the algorithm's potential as an efficient solution for reducing queues and delays at signalized intersections where mixed traffic conditions prevail.
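The abstract states that Max Pressure dynamically selects the phase order by computing a pressure value for each phase. A minimal sketch of the standard acyclic Max Pressure selection rule that this family of methods builds on is given below; the movement names and queue counts are illustrative assumptions, not data from the paper, and the actual pressure formulation in the study may differ (e.g. weighting for heterogeneous vehicle classes).

```python
# Sketch of acyclic Max Pressure phase selection (illustrative, not the
# paper's exact formulation). A phase's pressure is the sum, over the
# movements it serves, of (upstream queue - downstream queue); the phase
# with the highest pressure is served next.

def phase_pressure(phase, queues):
    """Pressure of a phase: sum of (inflow queue - outflow queue) per movement."""
    return sum(queues[up] - queues[down] for up, down in phase["movements"])

def select_next_phase(phases, queues):
    """Acyclic Max Pressure rule: serve the phase with maximum pressure."""
    return max(phases, key=lambda p: phase_pressure(p, queues))

# Hypothetical queue counts (vehicles) on approach and exit lanes.
queues = {"N_in": 12, "S_in": 9, "E_in": 4, "W_in": 6,
          "N_out": 2, "S_out": 1, "E_out": 3, "W_out": 0}

phases = [
    {"id": "NS", "movements": [("N_in", "S_out"), ("S_in", "N_out")]},
    {"id": "EW", "movements": [("E_in", "W_out"), ("W_in", "E_out")]},
]

# NS pressure = (12-1)+(9-2) = 18; EW pressure = (6-3)+(4-0) = 7.
print(select_next_phase(phases, queues)["id"])  # NS
```

In the hybrid scheme described above, a rule like this would fix only the *order* of phases; the green duration of the selected phase is then chosen by the PPO agent after every phase execution rather than for all phases at once.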
Pages: 22