Improving Fuel Economy with LSTM Networks and Reinforcement Learning

Cited by: 6
Authors
Bougiouklis, Andreas [1 ]
Korkofigkas, Antonis [1 ]
Stamou, Giorgos [1 ]
Affiliations
[1] Natl Tech Univ Athens, Athens, Greece
Keywords
Trajectory optimization; Velocity profile; Racing line; Topographical data; Electric vehicle; LEV; Neural network; LSTM; Reinforcement learning; Q-learning; Cruise
DOI
10.1007/978-3-030-01421-6_23
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
This paper presents a system for calculating the optimum velocities and trajectories of an electric vehicle for a specific route. Our objective is to minimize energy consumption over a trip without impacting the overall trip time. The system uses a particular segmentation of the route and involves a three-step procedure. In the first step, a neural network is trained on telemetry data to model the consumption of the vehicle based on its velocity and the surface gradient. In the second step, two Q-learning algorithms compute the optimum velocities and the racing line in order to minimize the consumption. In the final step, the computed data are presented to the driver through an interactive application. This system was installed on a light electric vehicle (LEV), and by adopting the suggested driving strategy we reduced its consumption by 24.03% compared with the classic constant-speed control technique.
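To make the second step concrete, below is a minimal Python sketch of tabular Q-learning over route segments, where the agent picks a target velocity for each segment and the reward trades an energy estimate against travel time. Everything in it is illustrative, not the authors' implementation: the segment lengths, gradients, velocity grid, reward weights, and the consumption_wh() stand-in (the paper uses a neural network trained on telemetry for this role) are assumptions, and the second Q-learning algorithm that computes the racing line is not shown.

import numpy as np

rng = np.random.default_rng(0)

# --- Route discretisation (illustrative values, not the authors' data) ---
segment_length = np.array([200.0, 300.0, 250.0, 400.0, 350.0])  # metres
gradient       = np.array([0.00, 0.02, -0.01, 0.03, -0.02])     # rise/run per segment

n_segments = len(segment_length)

# Discrete target velocities the agent may hold on a segment (m/s).
velocities = np.array([6.0, 8.0, 10.0, 12.0, 14.0])
n_actions = len(velocities)

def consumption_wh(v, grad, dist):
    """Stand-in consumption model: quadratic drag term plus an uphill term.
    In the paper this role is played by the network trained on telemetry."""
    drag  = 0.004 * v ** 2 * dist          # aerodynamic/rolling losses
    climb = 30.0 * max(grad, 0.0) * dist   # potential-energy cost when climbing
    return drag + climb

TIME_WEIGHT = 0.5              # trade-off between energy (Wh) and time (s)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

Q = np.zeros((n_segments, n_actions))   # state = segment index, action = velocity choice

for episode in range(5000):
    for s in range(n_segments):
        # epsilon-greedy action selection
        if rng.random() < EPS:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        v = velocities[a]
        energy = consumption_wh(v, gradient[s], segment_length[s])
        time_s = segment_length[s] / v
        reward = -(energy + TIME_WEIGHT * time_s)
        # bootstrap from the next segment, or 0 at the end of the route
        next_best = Q[s + 1].max() if s + 1 < n_segments else 0.0
        Q[s, a] += ALPHA * (reward + GAMMA * next_best - Q[s, a])

plan = velocities[Q.argmax(axis=1)]
print("suggested velocity per segment (m/s):", plan)

Bootstrapping with zero at the last segment makes each traversal a finite episode; in a setup like this, the learned per-segment velocities would be what the driver-facing application displays as the suggested speed profile.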
Pages: 230-239
Number of pages: 10
Related papers
50 items in total
  • [41] Enhancing the Fuel-Economy of V2I-Assisted Autonomous Driving: A Reinforcement Learning Approach
    Liu, Xiao
    Liu, Yuanwei
    Chen, Yue
    Hanzo, Lajos
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (08) : 8329 - 8342
  • [42] A Novel Machine Learning Model Using CNN-LSTM Parallel Networks for Predicting Ship Fuel Consumption
    Li, Xinyu
    Zuo, Yi
    Li, Tieshan
    Chen, C. L. Philip
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT II, 2024, 14448 : 108 - 118
  • [43] Improving Deep Reinforcement Learning With Mirror Loss
    Zhao, Jian
    Shu, Weide
    Zhao, Youpeng
    Zhou, Wengang
    Li, Houqiang
    IEEE TRANSACTIONS ON GAMES, 2023, 15 (03) : 337 - 347
  • [44] Improving wet clutch engagement with Reinforcement Learning
    Van Vaerenbergh, Kevin
    Rodriguez, Abdel
    Gagliolo, Matteo
    Vrancx, Peter
    Nowe, Ann
    Stoev, Julian
    Goossens, Stijn
    Pinte, Gregory
    Symens, Wim
    2012 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2012,
  • [45] Improving elevator performance using reinforcement learning
    Crites, RH
    Barto, AG
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 8: PROCEEDINGS OF THE 1995 CONFERENCE, 1996, 8 : 1017 - 1023
  • [46] IMPROVING FUEL-ECONOMY BY OPTIMIZED IN-CORE FUEL-MANAGEMENT
    LEFVERT, T
    KERNTECHNIK, 1988, 52 (04) : 228 - 233
  • [47] Improving Fuel Economy and Engine Performance through Gasoline Fuel Octane Rating
    Rodriguez-Fernandez, Jose
    Ramos, Angel
    Barba, Javier
    Cardenas, Dolores
    Delgado, Jesus
    ENERGIES, 2020, 13 (13)
  • [48] Improving the dynamics of quantum sensors with reinforcement learning
    Schuff, Jonas
    Fiderer, Lukas J.
    Braun, Daniel
    NEW JOURNAL OF PHYSICS, 2020, 22 (03):
  • [49] Reinforcement Learning for Improving Chemical Reaction Performance
    Hoque, Ajnabiul
    Surve, Mihir
    Kalyanakrishnan, Shivaram
    Sunoj, Raghavan B.
JOURNAL OF THE AMERICAN CHEMICAL SOCIETY, 2024,
  • [50] Improving Deep Reinforcement Learning with Knowledge Transfer
    Glatt, Ruben
    Reali Costa, Anna Helena
    THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2017, : 5036 - 5037