Real-time power optimization based on Q-learning algorithm for direct methanol fuel cell system

Cited by: 1
Authors
Chi, Xuncheng [1 ]
Chen, Fengxiang [1 ]
Zhai, Shuang [2 ]
Hu, Zhe [2 ]
Zhou, Su [3 ]
Wei, Wei [4 ]
Affiliations
[1] Tongji Univ, Sch Automot Studies, Shanghai, Peoples R China
[2] Shanghai Refire Technol Co Ltd, Shanghai, Peoples R China
[3] Shanghai Zhongqiao Vocat & Tech Univ, Shanghai, Peoples R China
[4] CAS &M Zhangjiagang New Energy Technol Co Ltd, Zhangjiagang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Direct methanol fuel cell (DMFC) system; Real-time power optimization; Methanol supply control; Reinforcement learning; Q-learning algorithm; MASS-TRANSPORT MODEL; NUMERICAL-MODEL; PERFORMANCE; DMFC;
DOI
10.1016/j.ijhydene.2024.09.084
CLC classification
O64 [Physical Chemistry (Theoretical Chemistry), Chemical Physics];
Discipline codes
070304; 081704;
Abstract
Efficient real-time power optimization of the direct methanol fuel cell (DMFC) system is crucial for enhancing its performance and reliability. The power of a DMFC system is mainly affected by stack temperature and circulating methanol concentration. However, the methanol concentration cannot be measured directly with reliable sensors, which poses a challenge for real-time power optimization. To address this issue, this paper investigates the operating mechanism of the DMFC system and establishes a system power model. Based on the established model, a reinforcement learning approach using the Q-learning algorithm is proposed to control the methanol supply and optimize DMFC system power under varying operating conditions. The algorithm is simple, easy to implement, and does not rely on methanol concentration measurements. To validate its effectiveness, simulations comparing the proposed method with the traditional perturbation and observation (P&O) algorithm are carried out under different operating conditions. The results show that the proposed Q-learning-based power optimization improves net power by 1% and eliminates the fluctuation of the methanol supply caused by P&O. To assess practical implementation and the real-time requirements of the algorithm, hardware-in-the-loop (HIL) experiments are conducted. The experimental results demonstrate that the proposed method optimizes net power under different operating conditions and match the simulation well, confirming the model's accuracy. Moreover, under varying load conditions, the proposed Q-learning-based power optimization reduces the root mean square error (RMSE) from 7.271% to 2.996% and the mean absolute error (MAE) from 5.036% to 0.331% compared with P&O.
Pages: 1241-1253 (13 pages)
Related Papers (50 total)
  • [31] A direct methanol fuel cell system to power a humanoid robot
    Joh, Han-Ik
    Ha, Tae Jung
    Hwang, Sang Youp
    Kim, Jong-Ho
    Chae, Seung-Hoon
    Cho, Jae Hyung
    Prabhuram, Joghee
    Kim, Soo-Kil
    Lim, Tae-Hoon
    Cho, Baek-Kyu
    Oh, Jun-Ho
    Moon, Sang Heup
    Ha, Heung Yong
    JOURNAL OF POWER SOURCES, 2010, 195 (01) : 293 - 298
  • [32] Indoor Emergency Path Planning Based on the Q-Learning Optimization Algorithm
    Xu, Shenghua
    Gu, Yang
    Li, Xiaoyan
    Chen, Cai
    Hu, Yingyi
    Sang, Yu
    Jiang, Wenxing
    ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION, 2022, 11 (01)
  • [33] A Q-Learning Based Energy Threshold Optimization Algorithm in LAA Networks
    Pei, Errong
    Zhou, Lineng
    Deng, Bingguang
    Lu, Xun
    Li, Yun
    Zhang, Zhizhong
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2021, 70 (07) : 7037 - 7049
  • [34] Real-time Optimal Planning for Redirected Walking Using Deep Q-Learning
    Lee, Dong-Yong
    Cho, Yong-Hun
    Lee, In-Kwon
    2019 26TH IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES (VR), 2019, : 63 - 71
  • [35] Q-learning and ACO hybridisation for real-time scheduling on heterogeneous distributed architectures
    Hajoui, Younes
    Bouattane, Omar
    Youssfi, Mohamed
    Illoussamen, El Houssein
    INTERNATIONAL JOURNAL OF COMPUTATIONAL SCIENCE AND ENGINEERING, 2019, 20 (02) : 225 - 239
  • [36] Dynamic Obstacle Avoidance of Mobile Robots Using Real-Time Q-learning
    Kim, HoWon
    Lee, WonChang
    2022 INTERNATIONAL CONFERENCE ON ELECTRONICS, INFORMATION, AND COMMUNICATION (ICEIC), 2022,
  • [37] A Real-Time Optimization of Reactive Power for An Intelligent System Using Genetic Algorithm
    Abdelhady, Suzan
    Osama, Ahmed
    Shaban, Ahmed
    Elbayoumi, Mahmoud
    IEEE ACCESS, 2020, 8 : 11991 - 12000
  • [38] Optimization of Electrical System Topology for Offshore Wind Farm Based on Q-learning Particle Swarm Optimization Algorithm
    Qi Y.
    Hou P.
    Jin R.
    Dianli Xitong Zidonghua/Automation of Electric Power Systems, 2021, 45 (21): : 66 - 75
  • [39] A novel Q-learning algorithm based on improved whale optimization algorithm for path planning
    Li, Ying
    Wang, Hanyu
    Fan, Jiahao
    Geng, Yanyu
    PLOS ONE, 2022, 17 (12):
  • [40] Dueling Double Q-learning based Real-time Energy Dispatch in Grid-connected Microgrids
    Shu, Yuankai
    Bi, Wenzheng
    Dong, Wei
    Yang, Qiang
    2020 19TH INTERNATIONAL SYMPOSIUM ON DISTRIBUTED COMPUTING AND APPLICATIONS FOR BUSINESS ENGINEERING AND SCIENCE (DCABES 2020), 2020, : 42 - 45