Optimization of On-Demand Shared Autonomous Vehicle Deployments Utilizing Reinforcement Learning

Times Cited: 4
Authors
Meneses-Cime, Karina [1 ]
Guvenc, Bilin Aksun [1 ]
Guvenc, Levent [1 ]
Affiliations
[1] Ohio State Univ, Automated Driving Lab, Columbus, OH 43210 USA
Keywords
shared autonomous vehicles; mobility; traffic-in-the-loop simulation; optimization; reinforcement learning; SIMULATION; DESIGN
DOI
10.3390/s22218317
CLC Classification Number
O65 [Analytical Chemistry]
Discipline Classification Codes
070302; 081704
Abstract
Ride-hailed shared autonomous vehicles (SAVs) have recently emerged as an economically feasible way of introducing autonomous driving technologies while serving the mobility needs of under-served communities. There has also been corresponding research on optimizing the operation of these SAVs. However, the current state-of-the-art research in this area treats only very simple road networks, neglects the effect of realistic surrounding traffic, and is therefore of limited use for planning SAV service deployments. In contrast, this paper uses a recent autonomous shuttle deployment site in Columbus, Ohio, as the basis for mobility studies and the optimization of SAV fleet deployment. Furthermore, this paper creates an SAV dispatcher based on reinforcement learning (RL) to minimize passenger wait time and maximize the number of passengers served. The resulting taxi dispatcher is then simulated in a realistic scenario while avoiding generalization or over-fitting to the area. It is found that an RL-aided taxi dispatcher algorithm can greatly improve the performance of an SAV deployment by increasing the overall number of trips completed and passengers served while decreasing passenger wait time.
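
The paper evaluates its RL dispatcher inside a traffic-in-the-loop simulation and does not publish the implementation, so the following Python sketch is only a hypothetical, heavily simplified illustration of the idea: a tabular Q-learning agent that, for each incoming ride request, chooses which zone's idle SAV to dispatch, with a reward that adds a bonus for serving the passenger and subtracts the resulting wait time. The zone grid, toy travel-time model, and all constants are assumptions for illustration, not the authors' design.

# Hypothetical, deliberately simplified sketch of a tabular Q-learning taxi
# dispatcher for SAVs. Zone grid, travel-time model, and reward constants are
# illustrative assumptions, not the design used in the paper.
import random
from collections import defaultdict

N_ZONES = 6                     # assumed coarse zoning of the service area
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
SERVED_BONUS = 10.0             # assumed reward for completing a pickup

# Q[request_zone][dispatch_zone]: value of sending an idle SAV waiting in
# dispatch_zone to a passenger requesting a ride in request_zone.
Q = defaultdict(lambda: [0.0] * N_ZONES)

def wait_time(dispatch_zone, request_zone):
    """Toy travel-time model: one time unit per zone of separation."""
    return abs(dispatch_zone - request_zone) + 1

def choose_dispatch(request_zone):
    """Epsilon-greedy choice of the zone whose idle SAV is dispatched."""
    if random.random() < EPSILON:
        return random.randrange(N_ZONES)
    values = Q[request_zone]
    return values.index(max(values))

def train(episodes=20000):
    for _ in range(episodes):
        request_zone = random.randrange(N_ZONES)      # new ride request
        dispatch_zone = choose_dispatch(request_zone)
        # Reward trades off serving the passenger against their wait time.
        reward = SERVED_BONUS - wait_time(dispatch_zone, request_zone)
        next_request = random.randrange(N_ZONES)      # next arriving request
        td_target = reward + GAMMA * max(Q[next_request])
        Q[request_zone][dispatch_zone] += ALPHA * (td_target - Q[request_zone][dispatch_zone])

if __name__ == "__main__":
    train()
    # The learned greedy policy sends the SAV nearest to each request,
    # which minimizes expected wait time under this toy model.
    for zone in range(N_ZONES):
        print("request zone", zone, "-> dispatch from zone", Q[zone].index(max(Q[zone])))

Under these assumptions the learned greedy policy converges to dispatching the nearest idle vehicle; the paper's actual dispatcher operates on a realistic Columbus, Ohio network with traffic-in-the-loop simulation, which this toy model does not attempt to capture.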
Pages: 21
Related Papers
50 records in total
  • [31] Multi-Agent Reinforcement Learning for Autonomous On Demand Vehicles
    Boyali, Ali
    Hashimoto, Naohisa
    John, Vijay
    Acarman, Tankut
    2019 30TH IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV19), 2019, : 1461 - 1468
  • [32] Autonomous vehicle navigation by reinforcement learning with problem structure identification
    Naruse, K
    Leu, MC
    INTELLIGENT AUTONOMOUS SYSTEMS: IAS-5, 1998, : 226 - 233
  • [33] Docking Control of an Autonomous Underwater Vehicle Using Reinforcement Learning
    Anderlini, Enrico
    Parker, Gordon G.
    Thomas, Giles
    APPLIED SCIENCES-BASEL, 2019, 9 (17):
  • [34] Autonomous Vehicle Driving Path Control with Deep Reinforcement Learning
    Tiong, Teckchai
    Saad, Ismail
    Teo, Kenneth Tze Kin
    bin Lago, Herwansyah
    2023 IEEE 13TH ANNUAL COMPUTING AND COMMUNICATION WORKSHOP AND CONFERENCE, CCWC, 2023, : 84 - 92
  • [35] Hierarchical Reinforcement Learning Method for Autonomous Vehicle Behavior Planning
    Qiao, Zhiqian
    Tyree, Zachariah
    Mudalige, Priyantha
    Schneider, Jeff
    Dolan, John M.
    2020 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2020, : 6084 - 6089
  • [36] Modular reinforcement learning for autonomous vehicle navigation in an unknown workspace
    Naruse, K
    Kakazu, Y
    Leu, MC
    1998 JAPAN-U.S.A. SYMPOSIUM ON FLEXIBLE AUTOMATION - PROCEEDINGS, VOLS I AND II, 1998, : 577 - 583
  • [37] Autonomous vehicle steering based on evaluative feedback by reinforcement learning
    Kuhnert, KD
    Krödel, M
    MACHINE LEARNING AND DATA MINING IN PATTERN RECOGNITION, PROCEEDINGS, 2005, 3587 : 405 - 414
  • [38] Reinforcement Learning Based Obstacle Avoidance for Autonomous Underwater Vehicle
    Prashant Bhopale
    Faruk Kazi
    Navdeep Singh
    Journal of Marine Science and Application, 2019, 18 : 228 - 238
  • [39] Reinforcement Learning for Autonomous Vehicle Movements in Wireless Sensor Networks
    Afifi, Haitham
    Ramaswamy, Arunselvan
    Karl, Holger
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2021), 2021,
  • [40] Autonomous Vehicle for Obstacle Detection and Avoidance Using Reinforcement Learning
    Arvind, C. S.
    Senthilnath, J.
    SOFT COMPUTING FOR PROBLEM SOLVING, SOCPROS 2018, VOL 1, 2020, 1048 : 55 - 66