ERL: Edge based Reinforcement Learning for optimized urban Traffic light control

Cited by: 0
Authors
Zhou, Pengyuan [1 ]
Braud, Tristan [2 ]
Alhilal, Ahmad [2 ]
Hui, Pan [1 ,2 ]
Kangasharju, Jussi [1 ]
Affiliations
[1] Univ Helsinki, Dept Comp Sci, Helsinki, Finland
[2] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
Keywords
INTELLIGENT TRANSPORTATION SYSTEMS;
DOI
10.1109/percomw.2019.8730706
CLC classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Traffic congestion is worsening in every major city and brings increasing costs to governments and drivers. Vehicular networks provide the ability to collect more data from vehicles and roadside units, and to sense traffic in real time. They represent a promising solution to alleviate traffic jams in urban environments. However, while the collected information is valuable, an efficient solution for better and faster utilization to alleviate congestion has yet to be developed. Current solutions are based either on mathematical models, which do not account for complex traffic scenarios, or on small-scale machine learning algorithms. In this paper, we propose ERL, a solution based on Edge Computing nodes to collect traffic data. ERL alleviates congestion by providing intelligent optimized traffic light control in real time. Edge servers run fast reinforcement learning algorithms to tune the metrics of the traffic signal control algorithm run at each intersection. ERL operates within the coverage area of the edge server, and uses aggregated data from neighboring edge servers to provide city-scale congestion control. The evaluation based on real map data shows that our system reduces average waiting time by 48.71% and trip duration by 32.77% in normally congested areas, with very fast training on ordinary servers.
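The abstract describes edge servers running fast reinforcement learning to tune per-intersection signal control. The paper itself does not publish code; the following is a minimal tabular Q-learning sketch for a single intersection, illustrating the general technique only. All names and parameters here (`SignalAgent`, `step`, the queue bins, the negative-total-queue reward) are assumptions of this sketch, not ERL's actual design.

```python
import random

# Illustrative tabular Q-learning for one intersection (not the ERL algorithm).
# State: discretized queue lengths on the north-south and east-west approaches.
# Action: which axis gets the green phase (0 = NS, 1 = EW).

class SignalAgent:
    def __init__(self, n_bins=4, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}  # (ns_bin, ew_bin) -> [Q(NS green), Q(EW green)]
        self.n_bins = n_bins
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def state(self, ns_queue, ew_queue):
        # Bucket each queue length into coarse bins of 5 vehicles.
        bin_ = lambda n: min(n // 5, self.n_bins - 1)
        return (bin_(ns_queue), bin_(ew_queue))

    def act(self, s):
        # Epsilon-greedy action selection over the two phases.
        if random.random() < self.epsilon:
            return random.randrange(2)
        qs = self.q.get(s, [0.0, 0.0])
        return 0 if qs[0] >= qs[1] else 1

    def update(self, s, a, reward, s_next):
        # Standard one-step Q-learning update.
        qs = self.q.setdefault(s, [0.0, 0.0])
        best_next = max(self.q.get(s_next, [0.0, 0.0]))
        qs[a] += self.alpha * (reward + self.gamma * best_next - qs[a])

def step(ns, ew, action, arrivals=(2, 2), service=6):
    # Toy traffic dynamics: the green axis discharges up to `service`
    # vehicles, then both axes receive new arrivals.
    if action == 0:
        ns = max(ns - service, 0)
    else:
        ew = max(ew - service, 0)
    ns += arrivals[0]
    ew += arrivals[1]
    reward = -(ns + ew)  # negative total queue, a proxy for waiting time
    return ns, ew, reward

random.seed(0)
agent = SignalAgent()
ns, ew = 10, 3
for _ in range(2000):
    s = agent.state(ns, ew)
    a = agent.act(s)
    ns2, ew2, r = step(ns, ew, a)
    agent.update(s, a, r, agent.state(ns2, ew2))
    ns, ew = ns2, ew2
```

In ERL's setting, each edge server would run such a learner (or tune the parameters of a conventional signal-control algorithm) for the intersections in its coverage area, exchanging aggregated state with neighboring edge servers for city-scale coordination.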
Pages: 849-854
Number of pages: 6
Related papers
50 records in total
  • [21] Fairness Control of Traffic Light via Deep Reinforcement Learning
    Li, Chenghao
    Ma, Xiaoteng
    Xia, Li
    Zhao, Qianchuan
    Yang, Jun
    2020 IEEE 16TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2020, : 652 - 658
  • [22] Reinforcement learning for traffic light control with emphasis on emergency vehicles
    Mahboubeh Shamsi
    Abdolreza Rasouli Kenari
    Roghayeh Aghamohammadi
    The Journal of Supercomputing, 2022, 78 : 4911 - 4937
  • [23] A distributed deep reinforcement learning method for traffic light control
    Liu, Bo
    Ding, Zhengtao
    NEUROCOMPUTING, 2022, 490 : 390 - 399
  • [25] Deep Reinforcement Learning for Addressing Disruptions in Traffic Light Control
    Rasheed, Faizan
    Yau, Kok-Lim Alvin
    Noor, Rafidah Md
    Chong, Yung-Wey
    CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 71 (02): : 2225 - 2247
  • [26] IntelliLight: A Reinforcement Learning Approach for Intelligent Traffic Light Control
    Wei, Hua
    Zheng, Guanjie
    Yao, Huaxiu
    Li, Zhenhui
    KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, : 2496 - 2505
  • [27] A Deep Reinforcement Learning Network for Traffic Light Cycle Control
    Liang, Xiaoyuan
    Du, Xunsheng
    Wang, Guiling
    Han, Zhu
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2019, 68 (02) : 1243 - 1253
  • [28] DDPGAT: Integrating MADDPG and GAT for optimized urban traffic light control
    Azad-Manjiri, Meisam
    Afsharchi, Mohsen
    Abdoos, Monireh
    IET INTELLIGENT TRANSPORT SYSTEMS, 2025, 19 (01)
  • [29] Adaptive traffic light control based on reinforcement learning under different stages of autonomy
    Xu, Zhuohang
    Zhang, Libin
    Qi, Fan
    2023 35TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2023, : 715 - 720
  • [30] Optimized traffic flow prediction based on cluster formation and reinforcement learning
    Rajkumar, S. C.
    Deborah, L. Jegatha
    Vijayakumar, P.
    INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, 2023, 36 (12)