ERL: Edge based Reinforcement Learning for optimized urban Traffic light control

Cited by: 0
Authors
Zhou, Pengyuan [1 ]
Braud, Tristan [2 ]
Alhilal, Ahmad [2 ]
Hui, Pan [1 ,2 ]
Kangasharju, Jussi [1 ]
Affiliations
[1] Univ Helsinki, Dept Comp Sci, Helsinki, Finland
[2] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
Keywords
INTELLIGENT TRANSPORTATION SYSTEMS;
DOI
10.1109/percomw.2019.8730706
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline Code
0812 ;
Abstract
Traffic congestion is worsening in every major city and brings increasing costs to governments and drivers. Vehicular networks provide the ability to collect more data from vehicles and roadside units, and to sense traffic in real time. They represent a promising solution for alleviating traffic jams in urban environments. However, while the collected information is valuable, an efficient solution for using it better and faster to alleviate congestion has yet to be developed. Current solutions are based either on mathematical models, which do not account for complex traffic scenarios, or on small-scale machine learning algorithms. In this paper, we propose ERL, a solution based on edge computing nodes that collect traffic data. ERL alleviates congestion by providing intelligent, optimized traffic light control in real time. Edge servers run fast reinforcement learning algorithms to tune the metrics of the traffic signal control algorithm run at each intersection. ERL operates within the coverage area of each edge server and uses aggregated data from neighboring edge servers to provide city-scale congestion control. An evaluation based on real map data shows that our system reduces average waiting time by 48.71% and trip duration by 32.77% in normally congested areas, with very fast training on ordinary servers.
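The abstract describes edge servers running fast reinforcement learning to tune the signal-control algorithm at each intersection. The toy tabular Q-learning loop below is an illustrative sketch only, not the paper's algorithm: it tunes a green-phase duration against a simulated queue, and the state discretization, action set, and queue dynamics are all assumptions made for this example.

```python
import random

# Illustrative sketch: tabular Q-learning that picks a green-phase
# duration for one intersection. All modeling choices here (actions,
# arrival rate, reward) are assumptions, not taken from the ERL paper.

ACTIONS = [10, 20, 30]           # candidate green-phase durations (seconds)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def step(queue, green):
    """Toy dynamics: cars arrive at ~0.5/s; up to 1 car/s departs on green."""
    arrivals = sum(random.random() < 0.5 for _ in range(green))
    served = min(queue + arrivals, green)
    new_queue = queue + arrivals - served
    return new_queue, -new_queue  # reward: fewer waiting cars is better

def bucket(queue):
    """Discretize queue length into one of 5 states."""
    return min(queue // 5, 4)

def train(episodes=200, seed=0):
    random.seed(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(5)]
    for _ in range(episodes):
        queue = random.randint(0, 20)
        for _ in range(20):       # 20 signal cycles per episode
            s = bucket(queue)
            a = (random.randrange(len(ACTIONS)) if random.random() < EPS
                 else max(range(len(ACTIONS)), key=lambda i: q[s][i]))
            queue, r = step(queue, ACTIONS[a])
            s2 = bucket(queue)
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
    return q

q_table = train()
# Greedy action for the heaviest-queue state after training.
best = ACTIONS[max(range(len(ACTIONS)), key=lambda i: q_table[4][i])]
print("preferred green duration for a heavy queue:", best)
```

In the paper's setting, such a learner would run on the edge server for each intersection in its coverage area, with neighboring servers exchanging aggregated state for city-scale coordination.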
Pages: 849-854
Page count: 6
Related Papers
50 records in total
  • [1] Intelligent Control of Urban Intersection Traffic Light Based on Reinforcement Learning Algorithm
    Raeisi, Moein
    Mahboob, Amir Soltany
    2021 26TH INTERNATIONAL COMPUTER CONFERENCE, COMPUTER SOCIETY OF IRAN (CSICC), 2021,
  • [2] DRLE: Decentralized Reinforcement Learning at the Edge for Traffic Light Control in the IoV
    Zhou, Pengyuan
    Chen, Xianfu
    Liu, Zhi
    Braud, Tristan
    Hui, Pan
    Kangasharju, Jussi
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2021, 22 (04) : 2262 - 2273
  • [3] Adaptive Traffic Signal Control for Urban Corridor Based on Reinforcement Learning
    Liu, Lishan
    Zhuang, Xiya
    Li, Qiang
    COMPUTATIONAL AND EXPERIMENTAL SIMULATIONS IN ENGINEERING, ICCES 2024-VOL 2, 2025, 173 : 25 - 35
  • [4] Traffic Light Control with Policy Gradient-Based Reinforcement Learning
    Tas, Mehmet Bilge Han
    Ozkan, Kemal
    Saricicek, Inci
    Yazici, Ahmet
    32ND IEEE SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE, SIU 2024, 2024,
  • [5] Urban Traffic Signal Control at the Edge: An Ontology-Enhanced Deep Reinforcement Learning Approach
    Guo, Jiaying
    Ghanadbashi, Saeedeh
    Wang, Shen
    Golpayegani, Fatemeh
    2023 IEEE 26TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS, ITSC, 2023, : 6027 - 6033
  • [6] Enhanced Multiagent Multi-Objective Reinforcement Learning for Urban Traffic Light Control
    Khamis, Mohamed A.
    Gomaa, Walid
    2012 11TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2012), VOL 1, 2012, : 586 - 591
  • [7] Adaptive urban traffic signal control based on enhanced deep reinforcement learning
    Cai, Changjian
    Wei, Min
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [8] Deep Reinforcement Learning for Autonomous Traffic Light Control
    Garg, Deepeka
    Chli, Maria
    Vogiatzis, George
    2018 3RD IEEE INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION ENGINEERING (ICITE), 2018, : 214 - 218
  • [9] RELight: a random ensemble reinforcement learning based method for traffic light control
    Huang, Jianbin
    Tan, Qinglin
    Qi, Ruijie
    Li, He
    APPLIED INTELLIGENCE, 2024, 54 : 95 - 112