A Speedy Reinforcement Learning-Based Energy Management Strategy for Fuel Cell Hybrid Vehicles Considering Fuel Cell System Lifetime

Cited by: 35
Authors
Li, Wei [1 ,2 ]
Ye, Jiaye [1 ]
Cui, Yunduan [1 ]
Kim, Namwook [3 ]
Cha, Suk Won [4 ]
Zheng, Chunhua [1 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Univ Town, Shenzhen Inst Adv Technol, 1068 Xueyuan Ave, Shenzhen 518055, Peoples R China
[2] Univ Chinese Acad Sci, 19 A Yuquan Rd, Beijing 100049, Peoples R China
[3] Hanyang Univ, Dept Mech Engn, 55 Hanyangdeahak Ro, Ansan 15588, Gyeonggi Do, South Korea
[4] Seoul Natl Univ, Sch Mech & Aerosp Engn, San 56-1, Seoul 151742, South Korea
Keywords
Energy management strategy; Fuel cell hybrid vehicle; Lifetime enhancement; Pre-initialization; Speedy reinforcement learning; Pontryagin's minimum principle; Electric vehicles; Power management; Model; Prediction
DOI
10.1007/s40684-021-00379-8
Chinese Library Classification (CLC)
X [Environmental science; safety science]
Discipline classification codes
08; 0830
Abstract
A speedy reinforcement learning (RL)-based energy management strategy (EMS) is proposed for fuel cell hybrid vehicles (FCHVs) in this research. It approaches near-optimal results with a fast convergence rate thanks to a pre-initialization framework, while also extending the fuel cell system (FCS) lifetime. In the pre-initialization framework, well-designed power-distribution rules are used to pre-initialize the Q-table of the RL algorithm and thereby expedite its optimization process. Driving cycles are modeled as Markov processes, and the FCS power difference between adjacent moments is used to evaluate the impact on the FCS lifetime. The proposed RL-based EMS is trained on three driving cycles and validated on another driving cycle. Simulation results demonstrate that the average fuel consumption difference between the proposed EMS and a dynamic programming-based EMS is 5.59% across the training and validation driving cycles. Additionally, the power fluctuation of the FCS is reduced by at least 13% with the proposed EMS compared to a conventional RL-based EMS that does not consider the FCS lifetime, which is significantly beneficial for the FCS lifetime. Furthermore, compared to the conventional RL-based EMS, the convergence speed of the proposed EMS is increased by 69% by the pre-initialization framework, indicating potential for real-time applications.
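To make the pre-initialization and the lifetime-aware reward concrete, the sketch below shows a minimal tabular Q-learning setup in which a rule-based power-split heuristic seeds the Q-table and the reward penalizes both hydrogen consumption and the FCS power change between adjacent time steps. This is not the authors' implementation: the state/action grids, the heuristic, the hyperparameters, and the weights W_H2 and W_FLUCT are illustrative assumptions.

```python
# A minimal sketch of the two ideas described in the abstract, under assumed
# names and parameters (not the authors' implementation): a rule-based heuristic
# pre-initializes the Q-table, and the reward penalizes the FCS power change
# between adjacent moments in addition to hydrogen consumption.
import numpy as np

# Assumed discretization of state (SOC, demanded power) and action (FCS power).
SOC_GRID  = np.linspace(0.3, 0.9, 13)    # battery state of charge
PDEM_GRID = np.linspace(0.0, 60.0, 13)   # power demand [kW]
PFCS_GRID = np.linspace(0.0, 50.0, 11)   # selectable FCS power levels [kW]

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # assumed Q-learning hyperparameters
W_H2, W_FLUCT = 1.0, 0.05                # assumed reward weights

def rule_based_action(soc, p_dem):
    """Heuristic power split: follow the demand, bias the FCS up when SOC is low."""
    target = p_dem + (0.6 - soc) * 20.0  # crude SOC-sustaining correction [kW]
    return int(np.argmin(np.abs(PFCS_GRID - target)))

def pre_initialize_q(scale=1.0):
    """Seed the Q-table so the rule-based action starts as the greedy choice."""
    q = np.zeros((len(SOC_GRID), len(PDEM_GRID), len(PFCS_GRID)))
    for i, soc in enumerate(SOC_GRID):
        for j, p_dem in enumerate(PDEM_GRID):
            q[i, j, rule_based_action(soc, p_dem)] = scale
    return q

def reward(h2_rate, p_fcs, p_fcs_prev):
    """Penalize hydrogen use and the FCS power difference between adjacent steps."""
    return -(W_H2 * h2_rate + W_FLUCT * abs(p_fcs - p_fcs_prev))

def epsilon_greedy(q, state, rng):
    """Epsilon-greedy action selection over the discrete FCS power levels."""
    i, j = state
    if rng.random() < EPSILON:
        return int(rng.integers(len(PFCS_GRID)))
    return int(np.argmax(q[i, j]))

def q_update(q, state, action, r, next_state):
    """Standard one-step Q-learning update."""
    i, j = state
    ni, nj = next_state
    q[i, j, action] += ALPHA * (r + GAMMA * np.max(q[ni, nj]) - q[i, j, action])
```

In the paper's formulation the driving cycle is modeled as a Markov process, so the next power demand would be sampled from an estimated transition probability matrix; that environment model is omitted from this sketch.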
Pages: 859-872
Page count: 14
Related papers
50 records in total
  • [21] A reinforcement learning-based energy management strategy for fuel cell electric vehicle considering coupled-energy sources degradations
    Huo, Weiwei
    Liu, Teng
    Lu, Bing
    SUSTAINABLE ENERGY GRIDS & NETWORKS, 2024, 40
  • [22] A collaborative energy management strategy based on multi-agent reinforcement learning for fuel cell hybrid electric vehicles
    Xiao, Yao
    Fu, Shengxiang
    Choi, Jongwoo
    Zheng, Chunhua
    2023 IEEE 98TH VEHICULAR TECHNOLOGY CONFERENCE, VTC2023-FALL, 2023
  • [23] Effective energy management strategy based on deep reinforcement learning for fuel cell hybrid vehicle considering multiple performance of integrated energy system
    Hu, Haoqin
    Lu, Chenlei
    Tan, Jiaqi
    Liu, Shengnan
    Xuan, Dongji
    INTERNATIONAL JOURNAL OF ENERGY RESEARCH, 2022, 46 (15) : 24254 - 24272
  • [24] Rule learning based energy management strategy of fuel cell hybrid vehicles considering multi-objective optimization
    Liu, Yonggang
    Liu, Junjun
    Zhang, Yuanjian
    Wu, Yitao
    Chen, Zheng
    Ye, Ming
    ENERGY, 2020, 207
  • [25] Energy Management Strategy Based on Reinforcement Learning and Frequency Decoupling for Fuel Cell Hybrid Powertrain
    Li, Hongzhe
    Kang, Jinsong
    Li, Cheng
    ENERGIES, 2024, 17 (08)
  • [26] Online energy management strategy of fuel cell hybrid electric vehicles based on rule learning
    Liu, Yonggang
    Liu, Junjun
    Qin, Datong
    Li, Guang
    Chen, Zheng
    Zhang, Yi
    JOURNAL OF CLEANER PRODUCTION, 2020, 260
  • [27] An online energy management strategy for fuel cell hybrid vehicles
    Zhang, Yu
    Chen, Ming
    Cai, Shuo
    Hou, Shengyan
    Yin, Hai
    Gao, Jinwu
    2021 PROCEEDINGS OF THE 40TH CHINESE CONTROL CONFERENCE (CCC), 2021, : 6034 - 6039
  • [28] Deep Reinforcement Learning Based Energy Management Strategy for Fuel Cell and Battery Powered Rail Vehicles
    Deng, Kai
    Hai, Di
    Peng, Hujun
    Loewenstein, Lars
    Hameyer, Kay
    2021 IEEE VEHICLE POWER AND PROPULSION CONFERENCE (VPPC), 2021
  • [29] Energy management strategy for fuel cell electric vehicles based on scalable reinforcement learning in novel environment
    Wang, Da
    Mei, Lei
    Xiao, Feng
    Song, Chuanxue
    Qi, Chunyang
    Song, Shixin
    INTERNATIONAL JOURNAL OF HYDROGEN ENERGY, 2024, 59 : 668 - 678
  • [30] Dyna algorithm-based reinforcement learning energy management for fuel cell hybrid engineering vehicles
    Liu, Huiying
    Yao, Yongming
    Li, Tianyu
    Du, Miaomiao
    Wang, Xiao
    Li, Haofa
    Li, Ming
    JOURNAL OF ENERGY STORAGE, 2024, 94