User-Centric Green Light Optimized Speed Advisory with Reinforcement Learning

Cited by: 0
Authors
Schlamp, Anna-Lena [1 ]
Gerner, Jeremias [1 ]
Bogenberger, Klaus [2 ]
Schmidtner, Stefanie [1 ]
Affiliations
[1] Technische Hochschule Ingolstadt, Department of Electrical Engineering and AImotion Bavaria, D-85049 Ingolstadt, Germany
[2] Technical University of Munich, School of Engineering & Design, D-80333 Munich, Germany
Keywords
DOI: not available
Chinese Library Classification (CLC): TP [Automation Technology; Computer Technology]
Subject classification code: 0812
Abstract
We address Green Light Optimized Speed Advisory (GLOSA), an application in the field of Intelligent Transportation Systems (ITS) for improving traffic flow and reducing emissions in urban areas. The aim of this study is to improve GLOSA both by including traffic condition information, specifically queue length, in the calculation of an optimal speed and by applying Reinforcement Learning (RL). We implement rule-based classic GLOSA and RL-based GLOSA in a common simulation environment to enable a fair comparison. In doing so, performance is also examined with respect to action frequency in order to create a user-centric GLOSA system for non-automated driving settings. Results show that incorporating queue information positively influences the performance of both RL agents and classic GLOSA systems. Both algorithms achieve their best results at the shortest investigated action interval, an update every second. As the time between updates increases, the improvement over the baseline without any GLOSA diminishes. However, the decline is more pronounced for the RL agent, so the classic GLOSA algorithm delivers better results on average once the interval between updates reaches five seconds. We make the source code of this work available at: github.com/urbanAIthi/GLOSA_RL.
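The following is a minimal, illustrative sketch of a queue-aware, rule-based GLOSA speed advisory of the kind the abstract describes. It is not the authors' implementation (see github.com/urbanAIthi/GLOSA_RL for that); all function and parameter names are hypothetical, and it assumes a known signal plan and a fixed queue discharge rate.

```python
"""Minimal rule-based GLOSA sketch, assuming a known signal plan and a fixed
queue discharge rate. Illustration only, not the authors' implementation;
all names and default values are hypothetical."""


def advisory_speed(dist_to_stopline_m: float,
                   time_to_green_s: float,
                   green_duration_s: float,
                   queue_length_m: float = 0.0,
                   queue_discharge_mps: float = 2.5,
                   v_min_mps: float = 5.0,
                   v_max_mps: float = 13.9) -> float:
    """Return an advisory speed (m/s) so the vehicle reaches the back of the
    queue roughly when it has cleared, avoiding a full stop where possible."""
    # With a queue, the relevant target is the back of the queue, not the stop line.
    target_dist = max(dist_to_stopline_m - queue_length_m, 0.0)
    # Earliest useful arrival: green onset plus the time the queue needs to discharge.
    earliest_arrival = time_to_green_s + queue_length_m / queue_discharge_mps
    # Latest useful arrival: end of the current green phase.
    latest_arrival = time_to_green_s + green_duration_s

    if earliest_arrival <= 0.0:
        # Green now and the queue (if any) has already had time to clear:
        # drive at the speed limit.
        return v_max_mps

    # Speed needed to arrive exactly when the queue has cleared.
    v_target = target_dist / earliest_arrival
    if v_target > v_max_mps:
        # Even at the limit the vehicle arrives after the queue has cleared;
        # advise the limit if that arrival still falls inside the green window.
        arrival_at_vmax = target_dist / v_max_mps
        return v_max_mps if arrival_at_vmax <= latest_arrival else v_min_mps
    # Do not advise crawling below the minimum comfortable speed; a real system
    # would instead target the following green phase.
    return max(v_target, v_min_mps)


if __name__ == "__main__":
    # Example: 300 m to the stop line, green in 20 s for 30 s, 40 m queue ahead.
    print(f"advised speed: {advisory_speed(300, 20, 30, 40):.1f} m/s")
```

An RL-based variant, as compared in the paper, would replace this closed-form rule with a learned policy mapping similar observations (signal state, distance, queue length) to a speed advisory at each action step.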
Pages: 3463-3470
Number of pages: 8