ERL: Edge based Reinforcement Learning for optimized urban Traffic light control

Cited by: 0
Authors
Zhou, Pengyuan [1 ]
Braud, Tristan [2 ]
Alhilal, Ahmad [2 ]
Hui, Pan [1 ,2 ]
Kangasharju, Jussi [1 ]
Affiliations
[1] Univ Helsinki, Dept Comp Sci, Helsinki, Finland
[2] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
Keywords
INTELLIGENT TRANSPORTATION SYSTEMS;
DOI
10.1109/percomw.2019.8730706
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Traffic congestion is worsening in every major city, bringing increasing costs to governments and drivers. Vehicular networks make it possible to collect more data from vehicles and roadside units and to sense traffic in real time, representing a promising solution to alleviate traffic jams in urban environments. However, while the collected information is valuable, an efficient way to exploit it for congestion relief has yet to be developed. Current solutions rely either on mathematical models, which do not account for complex traffic scenarios, or on small-scale machine learning algorithms. In this paper, we propose ERL, a solution based on edge computing nodes that collect traffic data. ERL alleviates congestion by providing intelligent, optimized traffic light control in real time. Edge servers run fast reinforcement learning algorithms to tune the parameters of the traffic signal control algorithm run at each intersection. ERL operates within the coverage area of each edge server and uses aggregated data from neighboring edge servers to provide city-scale congestion control. Evaluation on real map data shows that our system reduces average waiting time by 48.71% and trip duration by 32.77% in normally congested areas, with very fast training on ordinary servers.
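The abstract above describes reinforcement learning agents tuning signal control per intersection, but does not specify the algorithm used. As a purely illustrative sketch of the general technique, the following is a minimal, hypothetical tabular Q-learning loop for a single intersection, with states as discretized queue lengths, actions as green-phase choices, and reward as negative total queue length; the toy environment and all names here are assumptions, not the paper's method.

```python
import random
from collections import defaultdict

# Hypothetical sketch: tabular Q-learning for one intersection.
# State: discretized queue lengths on the N-S and E-W approaches.
# Action: which direction gets green (0 = N-S, 1 = E-W).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = (0, 1)

q_table = defaultdict(float)  # (state, action) -> estimated value

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (
        reward + GAMMA * best_next - q_table[(state, action)]
    )

def simulate_step(state, action):
    """Toy dynamics: the approach with green drains faster; the other
    accumulates. Reward is negative total queue length (less waiting
    is better)."""
    ns, ew = state
    if action == 0:
        ns, ew = max(0, ns - 3), min(9, ew + 1)
    else:
        ns, ew = min(9, ns + 1), max(0, ew - 3)
    return (ns, ew), -(ns + ew)

# Training loop: interact, observe reward, update the Q-table.
state = (5, 5)
for _ in range(2000):
    action = choose_action(state)
    next_state, reward = simulate_step(state, action)
    update(state, action, reward, next_state)
    state = next_state
```

In a deployment like the one the abstract sketches, the environment step would come from real sensed traffic rather than a toy simulator, and the edge server would additionally fold in aggregated state from neighboring edge servers.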
Pages: 849-854
Number of pages: 6
Related Papers
50 items in total
  • [41] A traffic light control method based on multi-agent deep reinforcement learning algorithm
    Liu, Dongjiang
    Li, Leixiao
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [42] Comparison of game theoretical strategy and reinforcement learning in traffic light control
    Guo J.
    Harmati I.
    PERIODICA POLYTECHNICA TRANSPORTATION ENGINEERING, 2020, 48 (04): 313 - 319
  • [43] Traffic Light Control Using Hierarchical Reinforcement Learning and Options Framework
    Borges, Dimitrius F.
    Leite, Joao Paulo R. R.
    Moreira, Edmilson M.
    Carpinteiro, Otavio A. S.
    IEEE ACCESS, 2021, 9 : 99155 - 99165
  • [44] Adaptive Broad Deep Reinforcement Learning for Intelligent Traffic Light Control
    Zhu, Ruijie
    Wu, Shuning
    Li, Lulu
    Ding, Wenting
    Lv, Ping
    Sui, Luyao
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (17): 28496 - 28507
  • [46] Application of Evolutionary Reinforcement Learning (ERL) Approach in Control Domain: A Review
    Goyal, Parul
    Malik, Hasmat
    Sharma, Rajneesh
    SMART INNOVATIONS IN COMMUNICATION AND COMPUTATIONAL SCIENCES, VOL 2, 2019, 670 : 273 - 288
  • [47] Reinforcement learning in urban network traffic signal control: A systematic literature review
    Noaeen, Mohammad
    Naik, Atharva
    Goodman, Liana
    Crebo, Jared
    Abrar, Taimoor
    Abad, Zahra Shakeri Hossein
    Bazzan, Ana L. C.
    Far, Behrouz
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 199
  • [48] Graph cooperation deep reinforcement learning for ecological urban traffic signal control
    Yan, Liping
    Zhu, Lulong
    Song, Kai
    Yuan, Zhaohui
    Yan, Yunjuan
    Tang, Yue
    Peng, Chan
    APPLIED INTELLIGENCE, 2023, 53 (06) : 6248 - 6265
  • [50] Adaptive Traffic Light Control With Deep Reinforcement Learning: An Evaluation of Traffic Flow and Energy Consumption
    Koch, Lucas
    Brinkmann, Tobias
    Wegener, Marius
    Badalian, Kevin
    Andert, Jakob
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (12) : 15066 - 15076