Dynamic traffic signal control for heterogeneous traffic conditions using Max Pressure and Reinforcement Learning

Cited: 2
Authors
Agarwal, Amit [1 ]
Sahu, Deorishabh [1 ]
Mohata, Rishabh [1 ]
Jeengar, Kuldeep [2 ]
Nautiyal, Anuj [1 ]
Saxena, Dhish Kumar [2 ]
Affiliations
[1] Indian Inst Technol Roorkee, Dept Civil Engn, Haridwar 247667, Uttaranchal, India
[2] Indian Inst Technol Roorkee, Dept Mech & Ind Engn, Roorkee 247667, Uttaranchal, India
Keywords
Adaptive Traffic Signal Control; Max Pressure; Mixed traffic; Reinforcement Learning; SATURATION FLOW; REAL-TIME; SYSTEM;
DOI
10.1016/j.eswa.2024.124416
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Optimization of green signal timing for each phase at a signalized intersection in an urban area is critical for effective traffic management and congestion mitigation. Many algorithms have been developed, yet very few target cities in developing nations, where traffic is characterized by its heterogeneous nature. While some recent studies have explored variants of Max Pressure (MP) and Reinforcement Learning (RL) for optimizing phase timing, their focus is limited to homogeneous traffic conditions. In developing nations such as India, control systems like fixed-time and actuated are still predominantly used in practice. The Composite Signal Control Strategy (CoSiCoSt) is also employed at a few intersections. However, there is a notable absence of advanced models addressing heterogeneous traffic behavior, which have great potential to reduce delays and queue lengths. The present study proposes a hybrid algorithm for an adaptive traffic control system under real-world heterogeneous traffic conditions. The proposed algorithm integrates Max Pressure with Reinforcement Learning: the former dynamically determines the phase order by performing pressure calculations for each phase, while the latter optimizes the timing of each phase to minimize delays and queue lengths using the proximal policy optimization algorithm. In contrast to past RL models, in which the timings of all phases are determined at once, the proposed algorithm determines the phase timing after the execution of every phase. To assess the impact, classified traffic volume is extracted from surveillance videos of an intersection in Ludhiana, Punjab, and simulated using Simulation of Urban Mobility (SUMO). Traffic volume data is collected over three distinct time periods of the day. The results of the proposed algorithm are compared with benchmark algorithms, such as Actuated, CoSiCoSt, acyclic and cyclic Max Pressure, and Reinforcement Learning-based algorithms.
To assess the performance, queue length, delay, and queue dissipation time are considered as key performance indicators. Of Actuated and CoSiCoSt, the latter performs better, and thus the performance of the proposed hybrid algorithm is compared with CoSiCoSt. The proposed algorithm reduces total delay and queue dissipation time in the range of 77.07%-87.66% and 53.95%-62.07%, respectively. Similarly, with respect to the best-performing RL model, the reductions in delay and queue dissipation time range from 55.63% to 77.12% and from 22.13% to 43.7%, respectively, which is significant at the 99% confidence level. The proposed algorithm is deployed on a wireless hardware architecture to confirm the feasibility of real-world implementation. The findings highlight the algorithm's potential as an efficient solution for reducing queues and delays at signalized intersections where mixed traffic conditions prevail.
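The Max Pressure phase-selection step described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the phase names, queue counts, and data layout are hypothetical, and a pressure is taken in the common form of upstream queue minus downstream queue, summed over the movements a phase serves.

```python
def movement_pressure(upstream_queue, downstream_queue):
    """Pressure of one movement: vehicles queued upstream minus downstream."""
    return upstream_queue - downstream_queue

def select_phase(phases):
    """Return the phase with maximum total pressure (acyclic Max Pressure).

    `phases` maps a phase id to a list of (upstream, downstream)
    queue-length pairs, one pair per movement served by that phase.
    """
    def total_pressure(movements):
        return sum(movement_pressure(u, d) for u, d in movements)
    return max(phases, key=lambda p: total_pressure(phases[p]))

# Hypothetical queue counts for a four-phase intersection.
phases = {
    "NS_through": [(12, 3), (9, 4)],   # pressure 9 + 5 = 14
    "NS_left":    [(5, 1)],            # pressure 4
    "EW_through": [(7, 6), (8, 2)],    # pressure 1 + 6 = 7
    "EW_left":    [(2, 0)],            # pressure 2
}
print(select_phase(phases))  # -> NS_through
```

In the hybrid scheme the abstract describes, a selection step like this would fix the next phase to serve, after which the RL agent (PPO) sets that phase's green duration before the next selection is made.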
Pages: 22