Station keeping control for aerostat in wind fields based on deep reinforcement learning

Cited: 0
Authors
Bai, Fangchao [1 ]
Yang, Xixiang [1 ]
Deng, Xiaolong [1 ]
Hou, Zhongxi [1 ]
Institutions
[1] College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China
Keywords
Aerodynamics; Airships; Deep learning; Learning algorithms; Markov processes
DOI
10.13700/j.bh.1001-5965.2022.0629
Abstract
In this paper, a stratospheric aerostat station keeping model is established. The station keeping problem is formulated as a Markov decision process, and Double Deep Q-learning with prioritized experience replay is applied to station keeping control under both unpowered and powered conditions. Metrics such as the average station keeping radius and the station keeping effective time ratio are used to assess the effectiveness of the control approach. The simulation results show that, for a mission with a station keeping radius of 50 km and a station keeping time of three days, the unpowered stratospheric aerostat achieves an average station keeping radius of 28.16 km with a station keeping effective time ratio of 83%. With powered propulsion, station keeping performance improves significantly: the powered stratospheric aerostat can achieve flight control within a station keeping radius of 20 km, with an average station keeping radius of 8.84 km and a station keeping effective time ratio of 100%. © 2024 Beijing University of Aeronautics and Astronautics (BUAA). All rights reserved.
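The abstract names Double Deep Q-learning with prioritized experience replay. As a rough illustration of how those two ingredients combine, the sketch below implements a tabular toy version: an online table selects the greedy action, a periodically synced target table evaluates it (the Double Q-learning decoupling), and replay samples are drawn with probability proportional to their last TD error. The environment, table sizes, and hyperparameters here are invented for illustration; the paper itself uses deep networks and an aerostat dynamics model in a wind field.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 5, 2
gamma, alpha = 0.95, 0.1

# Two Q-tables: the "online" one selects the argmax action, the "target"
# one evaluates it -- the Double Q-learning decoupling that reduces the
# overestimation bias of plain Q-learning.
q_online = np.zeros((n_states, n_actions))
q_target = np.zeros((n_states, n_actions))

# Replay buffer of (s, a, r, s') transitions; priority = last |TD error|.
buffer, priorities = [], []

def add(s, a, r, s2):
    buffer.append((s, a, r, s2))
    # New transitions get the current max priority so they are replayed soon.
    priorities.append(max(priorities, default=1.0))

def sample():
    p = np.asarray(priorities)
    return rng.choice(len(buffer), p=p / p.sum())

def update():
    i = sample()
    s, a, r, s2 = buffer[i]
    a_star = int(np.argmax(q_online[s2]))                    # online table picks
    td = r + gamma * q_target[s2, a_star] - q_online[s, a]   # target table scores
    q_online[s, a] += alpha * td
    priorities[i] = abs(td) + 1e-6                           # re-prioritize

# Toy rollout: reward 1 for action 1 in every state, random transitions.
for step in range(200):
    s = int(rng.integers(n_states))
    a = int(rng.integers(n_actions))
    r = 1.0 if a == 1 else 0.0
    add(s, a, r, int(rng.integers(n_states)))
    update()
    if step % 20 == 0:
        q_target = q_online.copy()                           # periodic sync
```

After training on this toy reward, the learned values for action 1 dominate those for action 0, as expected. In the paper's setting, states would encode aerostat position relative to the station center and the local wind field, and actions would be altitude or propulsion commands.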
Pages: 2354-2366
Related papers (50 records in total)
  • [31] Model-based deep reinforcement learning for wind energy bidding
    Sanayha, Manassakan
    Vateekul, Peerapon
    INTERNATIONAL JOURNAL OF ELECTRICAL POWER & ENERGY SYSTEMS, 2022, 136
  • [32] DRAG: Deep Reinforcement Learning Based Base Station Activation in Heterogeneous Networks
    Ye, Junhong
    Zhang, Ying-Jun Angela
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2020, 19 (09) : 2076 - 2087
  • [33] Deep Reinforcement Learning of UAV Tracking Control Under Wind Disturbances Environments
    Ma, Bodi
    Liu, Zhenbao
    Dang, Qingqing
    Zhao, Wen
    Wang, Jingyan
    Cheng, Yao
    Yuan, Zhirong
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [34] Model-Free Reinforcement Learning based Lateral Control for Lane Keeping
    Zhang, Qichao
    Luo, Rui
    Zhao, Dongbin
    Luo, Chaomin
    Qian, Dianwei
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [35] Reinforcement Learning and Deep Learning Based Lateral Control for Autonomous Driving
    Li, Dong
    Zhao, Dongbin
    Zhang, Qichao
    Chen, Yaran
    IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE, 2019, 14 (02) : 83 - 98
  • [36] Keeping in Touch with Collaborative UAVs: A Deep Reinforcement Learning Approach
    Yang, Bo
    Liu, Min
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 562 - 568
  • [37] THERMAL STATION MODELLING AND OPTIMAL CONTROL BASED ON DEEP LEARNING
    Cao, Min
    THERMAL SCIENCE, 2021, 25 (04): : 2965 - 2973
  • [38] Station-Keeping Control of Stratospheric Balloons Based on Simultaneous Optimistic Optimization in Dynamic Wind
    Fan, Yuanqiao
    Deng, Xiaolong
    Yang, Xixiang
    Long, Yuan
    Bai, Fangchao
    ELECTRONICS, 2024, 13 (20)
  • [39] Adaptive Wind Feedforward Control of an Unmanned Surface Vehicle for Station Keeping
    Qu, Huajin
    von Ellenrieder, Karl D.
    OCEANS 2015 - MTS/IEEE WASHINGTON, 2015,
  • [40] Deep reinforcement learning based control for Autonomous Vehicles in CARLA
    Perez-Gil, Oscar
    Barea, Rafael
    Lopez-Guillen, Elena
    Bergasa, Luis M.
    Gomez-Huelamo, Carlos
    Gutierrez, Rodrigo
    Diaz-Diaz, Alejandro
    MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (03) : 3553 - 3576