Station keeping control for aerostat in wind fields based on deep reinforcement learning

Cited: 0
Authors
Bai, Fangchao [1 ]
Yang, Xixiang [1 ]
Deng, Xiaolong [1 ]
Hou, Zhongxi [1 ]
Affiliation
[1] College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China
Keywords
Aerodynamics; Airships; Deep learning; Learning algorithms; Markov processes
DOI
10.13700/j.bh.1001-5965.2022.0629
Abstract
In this paper, a stratospheric aerostat station keeping model is established. Based on a Markov decision process formulation, Double Deep Q-learning with prioritized experience replay is applied to stratospheric aerostat station keeping control under both unpowered and powered conditions. Metrics such as the average station keeping radius and the station keeping effective time ratio are used to assess the effectiveness of the control approach. The simulation results show that, for a mission with a 50 km station keeping radius and a three-day station keeping period, the unpowered stratospheric aerostat achieves an average station keeping radius of 28.16 km and a station keeping effective time ratio of 83%. With powered propulsion, station keeping performance improves significantly: the powered stratospheric aerostat can achieve flight control within a 20 km station keeping radius, with an average station keeping radius of 8.84 km and a station keeping effective time ratio of 100%. © 2024 Beijing University of Aeronautics and Astronautics (BUAA). All rights reserved.
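The abstract names Double Deep Q-learning (DDQN) with prioritized experience replay as the control algorithm. The paper's actual state, action, and reward design is not reproduced in this record, so the following is a minimal Python/PyTorch sketch of the generic algorithmic core only: the names QNet, PrioritizedReplay, and ddqn_loss are illustrative, and a discretized action space (e.g., heading or altitude commands for the aerostat) is an assumption.

```python
import numpy as np
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small MLP: aerostat state features -> one Q-value per discrete action."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

class PrioritizedReplay:
    """Proportional prioritized experience replay (Schaul et al.)."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.buffer, self.pos = [], 0
        self.priorities = np.zeros(capacity, dtype=np.float64)

    def push(self, transition):
        # New transitions get the current max priority so each is replayed at least once.
        max_p = self.priorities[:len(self.buffer)].max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        scaled = self.priorities[:len(self.buffer)] ** self.alpha
        probs = scaled / scaled.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias of non-uniform sampling.
        weights = (len(self.buffer) * probs[idx]) ** (-beta)
        weights /= weights.max()
        batch = [self.buffer[i] for i in idx]
        return batch, idx, torch.as_tensor(weights, dtype=torch.float32)

    def update_priorities(self, idx, td_errors, eps=1e-6):
        self.priorities[idx] = np.abs(td_errors) + eps

def ddqn_loss(online, target, batch, weights, gamma=0.99):
    # batch is a list of (state, action, reward, next_state, done) tuples.
    s, a, r, s2, done = (torch.as_tensor(np.array(x), dtype=torch.float32)
                         for x in zip(*batch))
    q = online(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        a_star = online(s2).argmax(dim=1)             # online net selects the action...
        q_next = target(s2).gather(1, a_star.unsqueeze(1)).squeeze(1)  # ...target net evaluates it
        y = r + gamma * (1.0 - done) * q_next
    td = y - q
    return (weights * td.pow(2)).mean(), td.detach().numpy()
```

In a training loop one would, at each step, sample a batch from the replay buffer, backpropagate the weighted loss, call update_priorities with the absolute TD errors, and periodically copy the online network's weights into the target network. For station keeping, the reward would typically penalize distance from the station center; this, like the network sizes above, is an assumption rather than the paper's stated design.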
Pages: 2354-2366
Related Papers
50 records in total
  • [1] Station keeping control method based on deep reinforcement learning for stratospheric aerostat in dynamic wind field
    Bai, Fangchao
    Yang, Xixiang
    Deng, Xiaolong
    Ma, Zhenyu
    Long, Yuan
    ADVANCES IN SPACE RESEARCH, 2025, 75 (01): 752 - 766
  • [2] Altitude control of stratospheric aerostat based on deep reinforcement learning
    Zhang J.
    Yang X.
    Deng X.
    Guo Z.
    Zhai J.
    Beijing Hangkong Hangtian Daxue Xuebao/Journal of Beijing University of Aeronautics and Astronautics, 2023, 49 (08): 2062 - 2070
  • [3] Deep Learning for Station Keeping of AUVs
    Knudsen, Kristoffer Borgen
    Nielsen, Mikkel Cornelius
    Schjolberg, Ingrid
    OCEANS 2019 MTS/IEEE SEATTLE, 2019
  • [4] Autonomous Trajectory Planning Method for Stratospheric Airship Regional Station-Keeping Based on Deep Reinforcement Learning
    Liu, Sitong
    Zhou, Shuyu
    Miao, Jinggang
    Shang, Hai
    Cui, Yuxuan
    Lu, Ying
    AEROSPACE, 2024, 11 (09)
  • [5] A Deep Reinforcement Learning Strategy for Surrounding Vehicles-Based Lane-Keeping Control
    Kim, Jihun
    Park, Sanghoon
    Kim, Jeesu
    Yoo, Jinwoo
    SENSORS, 2023, 23 (24)
  • [6] The Strategy for Lane-keeping Vehicle Tasks based on Deep Reinforcement Learning Continuous Control
    Li, Qianxi
    Fei, Rong
    PROCEEDINGS OF 2024 INTERNATIONAL CONFERENCE ON MACHINE INTELLIGENCE AND DIGITAL APPLICATIONS, MIDA2024, 2024: 724 - 730
  • [7] Deep Reinforcement Learning for Automatic Generation Control of Wind Farms
    Vijayshankar, Sanjana
    Stanfel, Paul
    King, Jennifer
    Spyrou, Evangelia
    Johnson, Kathryn
    2021 AMERICAN CONTROL CONFERENCE (ACC), 2021: 1796 - 1802
  • [8] Ensemble-based Deep Reinforcement Learning for robust cooperative wind farm control
    He, Binghao
    Zhao, Huan
    Liang, Gaoqi
    Zhao, Junhua
    Qiu, Jing
    Dong, Zhao Yang
    INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2022, 143
  • [9] Deep Reinforcement Learning-Based Wind Disturbance Rejection Control Strategy for UAV
    Ma, Qun
    Wu, Yibo
    Shoukat, Muhammad Usman
    Yan, Yukai
    Wang, Jun
    Yang, Long
    Yan, Fuwu
    Yan, Lirong
    DRONES, 2024, 8 (11)