Dynamic traffic signal control for heterogeneous traffic conditions using Max Pressure and Reinforcement Learning

Cited by: 2
Authors
Agarwal, Amit [1 ]
Sahu, Deorishabh [1 ]
Mohata, Rishabh [1 ]
Jeengar, Kuldeep [2 ]
Nautiyal, Anuj [1 ]
Saxena, Dhish Kumar [2 ]
Affiliations
[1] Indian Inst Technol Roorkee, Dept Civil Engn, Haridwar 247667, Uttaranchal, India
[2] Indian Inst Technol Roorkee, Dept Mech & Ind Engn, Roorkee 247667, Uttaranchal, India
Keywords
Adaptive Traffic Signal Control; Max Pressure; Mixed traffic; Reinforcement Learning; SATURATION FLOW; REAL-TIME; SYSTEM;
DOI
10.1016/j.eswa.2024.124416
Chinese Library Classification
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Optimization of green signal timing for each phase at a signalized intersection in an urban area is critical for efficacious traffic management and congestion mitigation. Many algorithms have been developed, yet very few target cities in developing nations, where traffic is characterized by its heterogeneous nature. While some recent studies have explored variants of Max Pressure (MP) and Reinforcement Learning (RL) for optimizing phase timing, their focus has been limited to homogeneous traffic conditions. In developing nations such as India, fixed-time and actuated control systems are still predominantly used in practice, and the Composite Signal Control Strategy (CoSiCoSt) is employed at a few intersections. However, there is a notable absence of advanced models addressing heterogeneous traffic behavior, which have great potential to reduce delays and queue lengths. The present study proposes a hybrid algorithm for an adaptive traffic control system under real-world heterogeneous traffic conditions. The proposed algorithm integrates Max Pressure with Reinforcement Learning. The former dynamically determines the phase order by computing the pressure for each phase; the latter optimizes the timing of each phase to minimize delays and queue lengths using the proximal policy optimization algorithm. In contrast to past RL models, in which the timings of all phases are determined at once, the proposed algorithm determines the phase timing after the execution of every phase. To assess the impact, classified traffic volumes are extracted from surveillance videos of an intersection in Ludhiana, Punjab, and simulated using Simulation of Urban Mobility (SUMO). Traffic volume data is collected over three distinct time periods of the day. The results of the proposed algorithm are compared with benchmark algorithms, namely actuated control, CoSiCoSt, acyclic and cyclic Max Pressure, and RL-based algorithms.
To assess performance, queue length, delay, and queue dissipation time are considered as key performance indicators. Of actuated control and CoSiCoSt, the latter performs better; thus, the performance of the proposed hybrid algorithm is compared against CoSiCoSt. The proposed algorithm reduces total delay and queue dissipation time by 77.07%-87.66% and 53.95%-62.07%, respectively. Similarly, relative to the best-performing RL model, the reductions in delay and queue dissipation time range from 55.63% to 77.12% and from 22.13% to 43.7%, respectively, which are significant at the 99% confidence level. The proposed algorithm is deployed on a wireless hardware architecture to confirm the feasibility of real-world implementation. The findings highlight the algorithm's potential as an efficient solution for reducing queues and delays at signalized intersections where mixed traffic conditions prevail.
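The Max Pressure component described above selects the next phase by comparing per-phase pressures, where a phase's pressure is the sum over its permitted movements of upstream minus downstream queue lengths. The following is a minimal generic sketch of that selection rule only, not the paper's hybrid algorithm; the phase names, lane labels, and queue values are illustrative assumptions.

```python
# Generic acyclic Max Pressure phase selection (illustrative sketch).
# A movement is an (upstream_lane, downstream_lane) pair; a phase is
# the set of movements it serves simultaneously.

def phase_pressure(movements, queue):
    """Pressure of a phase: sum of (upstream queue - downstream queue)
    over the movements the phase serves."""
    return sum(queue[up] - queue[down] for up, down in movements)

def select_phase(phases, queue):
    """Acyclic MP rule: serve the phase with maximum pressure next."""
    return max(phases, key=lambda name: phase_pressure(phases[name], queue))

# Hypothetical four-approach intersection with two through phases.
phases = {
    "NS_through": [("N_in", "S_out"), ("S_in", "N_out")],
    "EW_through": [("E_in", "W_out"), ("W_in", "E_out")],
}
queue = {"N_in": 12, "S_in": 9, "E_in": 4, "W_in": 5,
         "N_out": 2, "S_out": 3, "E_out": 1, "W_out": 0}

# NS pressure: (12-3) + (9-2) = 16; EW pressure: (4-0) + (5-1) = 8.
print(select_phase(phases, queue))  # NS_through
```

In the paper's hybrid scheme, this pressure-based ordering decides only which phase runs next; the green duration of that phase would then be set by the RL (PPO) component after each phase executes, rather than by a fixed or pre-computed split.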
Pages: 22
Related papers
50 records
  • [21] Mitigating Action Hysteresis in Traffic Signal Control with Traffic Predictive Reinforcement Learning
    Han, Xiao
    Zhao, Xiangyu
    Zhang, Liang
    Wang, Wanyu
    PROCEEDINGS OF THE 29TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2023, 2023, : 673 - 684
  • [22] A Deep Reinforcement Learning Approach to Traffic Signal Control
    Razack, Aquib Junaid
    Ajith, Vysyakh
    Gupta, Rajiv
    2021 IEEE CONFERENCE ON TECHNOLOGIES FOR SUSTAINABILITY (SUSTECH2021), 2021,
  • [23] Reinforcement Learning With Function Approximation for Traffic Signal Control
    Prashanth, L. A.
    Bhatnagar, Shalabh
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2011, 12 (02) : 412 - 421
  • [24] Deep Reinforcement Learning for Traffic Signal Control: A Review
    Rasheed, Faizan
    Yau, Kok-Lim Alvin
    Noor, Rafidah Md.
    Wu, Celimuge
    Low, Yeh-Ching
    IEEE ACCESS, 2020, 8 : 208016 - 208044
  • [25] A Survey on Deep Reinforcement Learning for Traffic Signal Control
    Miao, Wei
    Li, Long
    Wang, Zhiwen
    PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021), 2021, : 1092 - 1097
  • [26] Reinforcement learning for True Adaptive traffic signal control
    Abdulhai, B
    Pringle, R
    Karakoulas, GJ
    JOURNAL OF TRANSPORTATION ENGINEERING, 2003, 129 (03) : 278 - 285
  • [27] Robust Deep Reinforcement Learning for Traffic Signal Control
    Tan, Kai Liang
    Sharma, Anuj
    Sarkar, Soumik
    Journal of Big Data Analytics in Transportation, 2020, 2 (3): : 263 - 274
  • [28] Cooperative Max-Pressure Enhanced Traffic Signal Control
    Li, Lin
    Li, Renbo
    Peng, Yuquan
    Huang, Chuanming
    Yuan, Jingling
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 4173 - 4177
  • [29] A Deep Reinforcement Learning-Based Cooperative Traffic Signal System Through Dual-Sensing Max Pressure Control
    Yan, Tianwen
    Zuo, Lei
    Yan, Maode
    Zhang, Jinqi
    2023 9th International Conference on Mechanical and Electronics Engineering, ICMEE 2023, 2023, : 258 - 264
  • [30] A Comparative Study of Urban Traffic Signal Control with Reinforcement Learning and Adaptive Dynamic Programming
    Dai, Yujie
    Zhao, Dongbin
    Yi, Jianqiang
    2010 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS IJCNN 2010, 2010,