Enhancing Urban Traffic Management in Taipei: A Reinforcement Learning Approach

Cited by: 0
Authors
William, Ivander [1 ]
Kozhevnikov, Sergei [2 ]
Sontheimer, Moritz [1 ]
Chou, Shou-Yan [1 ]
Affiliations
[1] Natl Taiwan Univ Sci & Technol, Taipei, Taiwan
[2] Czech Tech Univ, Czech Inst Informat Robot & Cybernet, Prague, Czech Republic
Keywords
Traffic Management; Transport Simulation Models; Simulation Environment; Reinforcement Learning; Q-Learning; Signal Control; Optimization; Time
DOI
10.1109/SCSP61506.2024.10552687
CLC number
TP39 [Computer Applications]
Subject classification
081203; 0835
Abstract
The study explores the application of reinforcement learning (RL) algorithms and the capabilities of the Simulation of Urban Mobility (SUMO) platform to enhance urban traffic management in Taipei. Focusing on two major intersections, this research employs Q-learning, a model-free RL algorithm, to optimize traffic signal timings based on real-time transport conditions. The methodology encompasses the collection of real vehicle data and traffic-light phase data, as well as simulation within the SUMO framework to model urban traffic scenarios. The findings reveal significant improvements in traffic throughput and reductions in trip durations during both peak and non-peak hours, demonstrating the potential of RL algorithms to enhance traffic flow efficiency. The study also highlights the algorithm's effectiveness in reducing CO2 emissions, contributing to environmental sustainability goals. The results underscore the importance of adopting advanced computational models in urban traffic management, offering insights into the development of smarter, more sustainable transportation systems.
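As a rough illustration of the tabular Q-learning described in the abstract, the sketch below applies the standard update rule to a signal-control setting. All names and parameters here (the queue-length state encoding, two-phase action set, learning rate, and reward definition) are assumptions for illustration, not the paper's actual implementation or SUMO interface.

```python
import random
from collections import defaultdict

# Hypothetical setup: state = discretized queue lengths per approach,
# action = index of the signal phase to run next.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
PHASES = [0, 1]  # e.g. north-south green vs. east-west green

# Q-table defaults to zero value for every (state, action) pair.
q_table = defaultdict(lambda: [0.0 for _ in PHASES])

def choose_phase(state, rng=random):
    """Epsilon-greedy phase selection over the Q-table."""
    if rng.random() < EPSILON:
        return rng.randrange(len(PHASES))
    values = q_table[state]
    return values.index(max(values))

def update(state, action, reward, next_state):
    """Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q_table[next_state])
    td_error = reward + GAMMA * best_next - q_table[state][action]
    q_table[state][action] += ALPHA * td_error

# One illustrative step: reward taken as the negative total queue length
# observed after the chosen phase runs.
update(state=(3, 1), action=0, reward=-4.0, next_state=(2, 1))
```

In a SUMO-based study, the state and reward would typically be read from the simulator at each decision step and the chosen phase pushed back to the traffic light; the update rule itself stays unchanged.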
Pages: 6
Related papers (50 total)
  • [21] Dynamic urban traffic rerouting with fog-cloud reinforcement learning
    Du, Runjia
    Chen, Sikai
    Dong, Jiqian
    Chen, Tiantian
    Fu, Xiaowen
    Labi, Samuel
    COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, 2024, 39 (06) : 793 - 813
  • [22] Adaptive Traffic Signal Control for Urban Corridor Based on Reinforcement Learning
    Liu, Lishan
    Zhuang, Xiya
    Li, Qiang
    COMPUTATIONAL AND EXPERIMENTAL SIMULATIONS IN ENGINEERING, ICCES 2024-VOL 2, 2025, 173 : 25 - 35
  • [23] Urban Traffic Signal Control with Reinforcement Learning from Demonstration Data
    Wang, Min
    Wu, Libing
    Li, Jianxin
    Wu, Dan
    Ma, Chao
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [24] Hierarchical multiagent reinforcement learning schemes for air traffic management
    Christos Spatharis
    Alevizos Bastas
    Theocharis Kravaris
    Konstantinos Blekas
    George A. Vouros
    Jose Manuel Cordero
    Neural Computing and Applications, 2023, 35 : 147 - 159
  • [25] Hierarchical multiagent reinforcement learning schemes for air traffic management
    Spatharis, Christos
    Bastas, Alevizos
    Kravaris, Theocharis
    Blekas, Konstantinos
    Vouros, George A.
    Manuel Cordero, Jose
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (01): : 147 - 159
  • [26] Collaborative multiagent reinforcement learning schemes for air traffic management
    Spatharis, Christos
    Blekas, Konstantinos
    Bastas, Alevizos
    Kravaris, Theocharis
    Vouros, George A.
    2019 10TH INTERNATIONAL CONFERENCE ON INFORMATION, INTELLIGENCE, SYSTEMS AND APPLICATIONS (IISA), 2019, : 357 - 364
  • [27] Designing Traffic Management Strategies Using Reinforcement Learning Techniques
    Taylor C.
    Vargo E.
    Bromberg E.
    Manderfield T.
    Journal of Air Transportation, 2023, 31 (04): : 199 - 212
  • [28] EGLight: enhancing deep reinforcement learning with expert guidance for traffic signal control
    Zhang, Meng
    Wang, Dianhai
    Cai, Zhengyi
    Huang, Yulang
    Yu, Hongxin
    Qin, Hanwu
    Zeng, Jiaqi
    TRANSPORTMETRICA A-TRANSPORT SCIENCE, 2025,
  • [29] A Deep Reinforcement Learning Approach for Fair Traffic Signal Control
    Raeis, Majid
    Leon-Garcia, Alberto
    2021 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2021, : 2512 - 2518
  • [30] Autonomous driving in the uncertain traffic: a deep reinforcement learning approach
    Yang Shun
    Wu Jian
    Zhang Sumin
    Han Wei
    The Journal of China Universities of Posts and Telecommunications, 2018, 25 (06) : 21 - 30