Station keeping control for aerostat in wind fields based on deep reinforcement learning

Cited: 0
Authors
Bai, Fangchao [1 ]
Yang, Xixiang [1 ]
Deng, Xiaolong [1 ]
Hou, Zhongxi [1 ]
Affiliations
[1] College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China
Keywords
Aerodynamics; Airships; Deep learning; Learning algorithms; Markov processes
DOI
10.13700/j.bh.1001-5965.2022.0629
Abstract
In this paper, a stratospheric aerostat station keeping model is established. Based on the Markov decision process, Double Deep Q-learning with prioritized experience replay is applied to stratospheric aerostat station keeping control under both powered and unpowered conditions. Metrics such as the average station keeping radius and the station keeping effective time ratio are used to assess the effectiveness of the station keeping control approach. The simulation results show that, for a mission with a station keeping radius of 50 km and a station keeping duration of three days, the unpowered stratospheric aerostat achieves an average station keeping radius of 28.16 km and a station keeping effective time ratio of 83%. With powered propulsion, station keeping performance improves significantly: the powered stratospheric aerostat can achieve flight control within a station keeping radius of 20 km, with an average station keeping radius of 8.84 km and a station keeping effective time ratio of 100%. © 2024 Beijing University of Aeronautics and Astronautics (BUAA). All rights reserved.
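The abstract names Double Deep Q-learning with prioritized experience replay as the control algorithm. The sketch below illustrates the two generic building blocks of that technique, not the authors' implementation: a proportional prioritized replay buffer, and the double-DQN target in which the online network selects the next action while the target network evaluates it. All class and function names, and the use of plain NumPy arrays in place of neural networks, are illustrative assumptions.

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Proportional prioritized experience replay (illustrative sketch)."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priority shapes sampling
        self.buffer = []
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current max priority so each is sampled at least once.
        max_p = self.priorities[:len(self.buffer)].max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4, rng=None):
        rng = rng or np.random.default_rng()
        p = self.priorities[:len(self.buffer)] ** self.alpha
        p /= p.sum()
        idx = rng.choice(len(self.buffer), size=batch_size, p=p)
        # Importance-sampling weights correct the bias from non-uniform sampling.
        w = (len(self.buffer) * p[idx]) ** (-beta)
        w /= w.max()
        return idx, [self.buffer[i] for i in idx], w

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # Priority is the absolute TD error plus a small constant.
        self.priorities[idx] = np.abs(td_errors) + eps

def double_dqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double DQN: the online net picks the action, the target net scores it."""
    best_a = np.argmax(q_online_next, axis=1)
    q_eval = q_target_next[np.arange(len(best_a)), best_a]
    return rewards + gamma * (1.0 - dones) * q_eval
```

In a training loop, transitions would be added to the buffer each step, sampled in batches weighted by `w` for the loss, and their priorities refreshed with the new TD errors after each gradient update.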
Pages: 2354-2366
Related Papers
50 records
  • [21] Manipulator Control Method Based on Deep Reinforcement Learning
    Zeng, Rui
    Liu, Manlu
    Zhang, Junjun
    Li, Xinmao
    Zhou, Qijie
    Jiang, Yuanchen
    PROCEEDINGS OF THE 32ND 2020 CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2020), 2020, : 415 - 420
  • [22] Intelligent Control of Manipulator Based on Deep Reinforcement Learning
    Zhou, Jiangtao
    Zheng, Hua
    Zhao, Dongzhu
    Chen, Yingxue
    2021 12TH INTERNATIONAL CONFERENCE ON MECHANICAL AND AEROSPACE ENGINEERING (ICMAE), 2021, : 275 - 279
  • [23] Aircraft Control Method Based on Deep Reinforcement Learning
    Zhen, Yan
    Hao, Mingrui
    PROCEEDINGS OF 2020 IEEE 9TH DATA DRIVEN CONTROL AND LEARNING SYSTEMS CONFERENCE (DDCLS'20), 2020, : 912 - 917
  • [24] Deep reinforcement learning based voltage control revisited
    Nematshahi, Saeed
    Shi, Di
    Wang, Fengyu
    Yan, Bing
    Nair, Adithya
    IET GENERATION TRANSMISSION & DISTRIBUTION, 2023, 17 (21) : 4826 - 4835
  • [25] Missile Attitude Control Based on Deep Reinforcement Learning
    Li, Bohao
    Ma, Fei
    Wu, Yunjie
    2020 IEEE 16TH INTERNATIONAL CONFERENCE ON CONTROL & AUTOMATION (ICCA), 2020, : 931 - 936
  • [26] Optimal control of a wind farm in time-varying wind using deep reinforcement learning
    Kim, Taewan
    Kim, Changwook
    Song, Jeonghwan
    You, Donghyun
    ENERGY, 2024, 303
  • [27] A Control Strategy Based on Deep Reinforcement Learning Under the Combined Wind-Solar Storage System
    Huang, Shiying
    Yang, Ming
    Zhang, Changhang
    Yun, Jiangyang
    Gao, Yuan
    Li, Peng
    2020 IEEE STUDENT CONFERENCE ON ELECTRIC MACHINES AND SYSTEMS (SCEMS 2020), 2020, : 819 - 824
  • [28] A Control Strategy Based on Deep Reinforcement Learning Under the Combined Wind-Solar Storage System
    Huang, Shiying
    Li, Peng
    Yang, Ming
    Gao, Yuan
    Yun, Jiangyang
    Zhang, Changhang
    IEEE TRANSACTIONS ON INDUSTRY APPLICATIONS, 2021, 57 (06) : 6547 - 6558
  • [29] Hybrid Energy Storage Control Based on Prediction and Deep Reinforcement Learning Compensation for Wind Power Smoothing
    Wang, Xin
    Zhou, Jianshu
    Qin, Bin
    2023 IEEE/IAS INDUSTRIAL AND COMMERCIAL POWER SYSTEM ASIA, I&CPS ASIA, 2023, : 1530 - 1535
  • [30] Wind Farm Power Generation Control Via Double-Network-Based Deep Reinforcement Learning
    Xie, Jingjie
    Dong, Hongyang
    Zhao, Xiaowei
    Karcanias, Aris
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (04) : 2321 - 2330