Dynamic traffic signal control for heterogeneous traffic conditions using Max Pressure and Reinforcement Learning

Cited by: 2
Authors
Agarwal, Amit [1 ]
Sahu, Deorishabh [1 ]
Mohata, Rishabh [1 ]
Jeengar, Kuldeep [2 ]
Nautiyal, Anuj [1 ]
Saxena, Dhish Kumar [2 ]
Affiliations
[1] Indian Inst Technol Roorkee, Dept Civil Engn, Haridwar 247667, Uttaranchal, India
[2] Indian Inst Technol Roorkee, Dept Mech & Ind Engn, Roorkee 247667, Uttaranchal, India
Keywords
Adaptive Traffic Signal Control; Max Pressure; Mixed traffic; Reinforcement Learning; SATURATION FLOW; REAL-TIME; SYSTEM;
DOI
10.1016/j.eswa.2024.124416
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Optimization of green signal timing for each phase at a signalized intersection in an urban area is critical for efficacious traffic management and congestion mitigation. Many algorithms have been developed, yet very few target cities in developing nations, where traffic is characterized by its heterogeneous nature. While some recent studies have explored variants of Max Pressure (MP) and Reinforcement Learning (RL) for optimizing phase timing, their focus is limited to homogeneous traffic conditions. In developing nations, such as India, control systems like fixed-time and actuated are still predominantly used in practice. The Composite Signal Control Strategy (CoSiCoSt) is also employed at a few intersections. However, there is a notable absence of advanced models addressing heterogeneous traffic behavior, which have great potential to reduce delays and queue lengths. The present study proposes a hybrid algorithm for an adaptive traffic control system under real-world heterogeneous traffic conditions. The proposed algorithm integrates Max Pressure with Reinforcement Learning. The former dynamically determines the phase order by performing pressure calculations for each phase. The latter optimizes the timing of each phase to minimize delays and queue lengths using the Proximal Policy Optimization algorithm. In contrast to past RL models, in which the timings of all phases are determined at once, the proposed algorithm determines the phase timing after the execution of every phase. To assess the impact, classified traffic volumes are extracted from surveillance videos of an intersection in Ludhiana, Punjab, and simulated using Simulation of Urban Mobility (SUMO). Traffic volume data are collected over three distinct time periods of the day. The results of the proposed algorithm are compared with benchmark algorithms, such as Actuated, CoSiCoSt, and acyclic and cyclic Max Pressure and Reinforcement Learning-based algorithms.
To assess the performance, queue length, delay, and queue dissipation time are considered as key performance indicators. Of the actuated and CoSiCoSt controls, the latter performs better, and thus the performance of the proposed hybrid algorithm is compared with CoSiCoSt. The proposed algorithm reduces total delay and queue dissipation time by 77.07%-87.66% and 53.95%-62.07%, respectively. Similarly, with respect to the best-performing RL model, the drop in delay and queue dissipation time ranges from 55.63% to 77.12% and 22.13% to 43.7%, respectively, which is significant at the 99% confidence level. The proposed algorithm is deployed on a wireless hardware architecture to confirm the feasibility of real-world implementation. The findings highlight the algorithm's potential as an efficient solution for queues and delays at signalized intersections where mixed traffic conditions prevail.
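The max-pressure phase ordering described in the abstract can be sketched as below. This is a minimal illustration, not the authors' implementation: the phase names, queue counts, and the simple "upstream queue minus downstream queue" pressure definition are assumptions for exposition (the paper adapts the scheme to heterogeneous traffic).

```python
# Hedged sketch of acyclic Max Pressure phase selection.
# Each phase serves a set of movements; a movement's pressure is taken here
# as its upstream queue length minus its downstream queue length, and the
# phase with the largest total pressure is served next.

def movement_pressure(upstream_queue: int, downstream_queue: int) -> int:
    """Pressure of one movement: vehicles waiting minus vehicles downstream."""
    return upstream_queue - downstream_queue


def select_next_phase(phases: dict) -> str:
    """Return the phase id whose served movements carry the most pressure.

    `phases` maps a phase id to a list of (upstream_queue, downstream_queue)
    pairs, one per movement the phase serves.
    """
    return max(
        phases,
        key=lambda p: sum(movement_pressure(u, d) for u, d in phases[p]),
    )


# Hypothetical two-phase intersection with example queue counts.
phases = {
    "NS_through": [(12, 2), (9, 1)],  # total pressure: 10 + 8 = 18
    "EW_through": [(4, 0), (6, 3)],   # total pressure: 4 + 3 = 7
}
print(select_next_phase(phases))  # NS_through
```

In the proposed hybrid scheme, a selection step like this fixes the phase order, while the RL (PPO) component chooses the green duration for the selected phase after every phase execution.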
Pages: 22
Related Papers
50 records
  • [31] Adaptive and Responsive Traffic Signal Control using Reinforcement Learning and Fog Computing
    Tang, Chengyu
    Baskiyar, Sanjeev
    2024 IEEE CLOUD SUMMIT, CLOUD SUMMIT 2024, 2024, : 36 - 41
  • [32] Traffic Signal Control Using Hybrid Action Space Deep Reinforcement Learning
    Bouktif, Salah
    Cheniki, Abderraouf
    Ouni, Ali
    SENSORS, 2021, 21 (07)
  • [33] Traffic Signal Control Using Deep Reinforcement Learning with Multiple Resources of Rewards
    Zhong, Dunhao
    Boukerche, Azzedine
    PE-WASUN'19: PROCEEDINGS OF THE 16TH ACM INTERNATIONAL SYMPOSIUM ON PERFORMANCE EVALUATION OF WIRELESS AD HOC, SENSOR, & UBIQUITOUS NETWORKS, 2019, : 23 - 28
  • [34] Two-layer coordinated reinforcement learning for traffic signal control in traffic network
    Ren, Fuyue
    Dong, Wei
    Zhao, Xiaodong
    Zhang, Fan
    Kong, Yaguang
    Yang, Qiang
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 235
  • [35] Using Reinforcement Learning With Partial Vehicle Detection for Intelligent Traffic Signal Control
    Zhang, Rusheng
    Ishikawa, Akihiro
    Wang, Wenli
    Striner, Benjamin
    Tonguz, Ozan K.
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2021, 22 (01) : 404 - 415
  • [36] Model-Based Deep Reinforcement Learning with Traffic Inference for Traffic Signal Control
    Wang, Hao
    Zhu, Jinan
    Gu, Bao
    APPLIED SCIENCES-BASEL, 2023, 13 (06):
  • [37] Traffic signal control in mixed traffic environment based on advance decision and reinforcement learning
    Du, Yu
    ShangGuan, Wei
    Chai, Linguo
    TRANSPORTATION SAFETY AND ENVIRONMENT, 2022, 4 (04)
  • [38] A Deep Reinforcement Learning Approach to Traffic Signal Control With Temporal Traffic Pattern Mining
    Ma, Dongfang
    Zhou, Bin
    Song, Xiang
    Dai, Hanwen
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (08) : 11789 - 11800
  • [39] Multi-agent deep reinforcement learning with traffic flow for traffic signal control
    Hou, Liang
    Huang, Dailin
    Cao, Jie
    Ma, Jialin
    JOURNAL OF CONTROL AND DECISION, 2025, 12 (01) : 81 - 92