Energy consumption optimisation for unmanned aerial vehicle based on reinforcement learning framework

Cited: 0
Authors
Wang Z. [1 ]
Xing Y. [1 ]
Affiliations
[1] Department of Aerospace, Cranfield University, College Rd, Wharley End, Bedford
Keywords
energy efficiency; machine learning; path planning; power consumption; Q-Learning; reinforcement learning; RL; trajectory optimisation;
DOI
10.1504/IJPT.2024.138001
Abstract
The average battery life of drones in use today is around 30 minutes, which significantly limits long-range operations such as seamless delivery and security monitoring. Meanwhile, the transportation sector is responsible for 93% of all carbon emissions, making it crucial to control the energy usage of UAVs in operation for future net-zero, massive-scale air traffic. In this study, a reinforcement learning (RL)-based model was implemented for the energy consumption optimisation of drones. The RL-based energy optimisation framework dynamically tunes the vehicle control system to maximise energy economy while accounting for mission objectives, ambient conditions, and system performance, selecting the most energy-efficient route. Based on the training results, a trained UAV in this study saves between 50.1% and 91.6% more energy than an untrained UAV on the same map. © 2024 Inderscience Publishers. All rights reserved.
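The Q-learning core of such a framework can be illustrated with a toy sketch: tabular Q-learning on a small grid in which every move costs one unit of energy, so the learned policy minimises energy by converging on the shortest route to the goal. The grid size, reward values, and hyperparameters below are illustrative assumptions for exposition only, not the environment or parameters used in the paper.

```python
import random

# Minimal tabular Q-learning sketch for energy-aware UAV path planning.
# The 5x5 grid, unit energy cost per move, and all hyperparameters are
# illustrative assumptions, not the paper's actual setup.

GRID = 5
START, GOAL = (0, 0), (GRID - 1, GRID - 1)
ACTIONS = [(0, 1), (1, 0), (0, -1), (-1, 0)]   # East, South, West, North
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2          # learning rate, discount, exploration

def step(state, move):
    """Apply a move; off-grid moves keep the UAV in place but still cost energy."""
    r, c = state
    nr, nc = r + move[0], c + move[1]
    if not (0 <= nr < GRID and 0 <= nc < GRID):
        nr, nc = r, c
    reward = 10.0 if (nr, nc) == GOAL else -1.0  # -1 models one unit of energy
    return (nr, nc), reward

def train(episodes=3000, seed=0):
    """Epsilon-greedy Q-learning; returns the learned Q-table."""
    rng = random.Random(seed)
    q = {}  # (state, action_index) -> value; unseen pairs default to 0.0
    for _ in range(episodes):
        state = START
        while state != GOAL:
            if rng.random() < EPSILON:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))
            nxt, reward = step(state, ACTIONS[a])
            best_next = max(q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            old = q.get((state, a), 0.0)
            q[(state, a)] = old + ALPHA * (reward + GAMMA * best_next - old)
            state = nxt
    return q

def greedy_energy(q, cap=100):
    """Follow the greedy policy from START; return moves (energy) used, or None if stuck."""
    state, moves = START, 0
    while state != GOAL and moves < cap:
        a = max(range(len(ACTIONS)), key=lambda i: q.get((state, i), 0.0))
        state, _ = step(state, ACTIONS[a])
        moves += 1
    return moves if state == GOAL else None

if __name__ == "__main__":
    q = train()
    print(greedy_energy(q))  # the shortest route on a 5x5 grid needs 8 moves
```

Because each move carries a fixed energy penalty, maximising the discounted return is equivalent here to minimising total energy spent reaching the goal; an untrained (random) policy wanders far longer on the same map, which mirrors the trained-versus-untrained comparison the study reports.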
Pages: 75 - 94
Page count: 19
Related papers
50 records in total
  • [1] Energy Consumption Optimization of Unmanned Aerial Vehicle Assisted Mobile Edge Computing Systems Based on Deep Reinforcement Learning
    Zhang, Guangchi
    He, Zinan
    Cui, Miao
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2023, 45 (05) : 1635 - 1643
  • [2] Fuzzing for Unmanned Aerial Vehicle System Based on Reinforcement Learning
    Yu, Zhenhua
    Yang, Wenjian
    Li, Xiteng
    Cong, Xuya
    Computer Engineering and Applications, 2024, 60 (21) : 89 - 98
  • [3] Cooperatively pursuing a target unmanned aerial vehicle by multiple unmanned aerial vehicles based on multiagent reinforcement learning
    Wang X.
    Xuan S.
    Ke L.
    Advanced Control for Applications: Engineering and Industrial Systems, 2020, 2 (02):
  • [4] Trajectory planning for unmanned aerial vehicle slung-payload aerial transportation system based on reinforcement learning
    1600, Editorial Board of Jilin University (51): 2259 - 2267
  • [5] A Reinforcement Learning Method Based on an Improved Sampling Mechanism for Unmanned Aerial Vehicle Penetration
    Wang, Yue
    Li, Kexv
    Zhuang, Xing
    Liu, Xinyu
    Li, Hanyu
    AEROSPACE, 2023, 10 (07)
  • [6] Distributed Unmanned Aerial Vehicle Cluster Testing Method Based on Deep Reinforcement Learning
    Li, Dong
    Yang, Panfei
    APPLIED SCIENCES-BASEL, 2024, 14 (23):
  • [7] Reinforcement Learning-Based Optimal Flat Spin Recovery for Unmanned Aerial Vehicle
    Kim, Donghae
    Oh, Gyeongtaek
    Seo, Yongjun
    Kim, Youdan
    JOURNAL OF GUIDANCE CONTROL AND DYNAMICS, 2017, 40 (04) : 1074 - 1081
  • [8] Development of Unmanned Aerial Vehicle Navigation and Warehouse Inventory System Based on Reinforcement Learning
    Lin, Huei-Yung
    Chang, Kai-Lun
    Huang, Hsin-Ying
    DRONES, 2024, 8 (06)
  • [9] Monte Carlo-based reinforcement learning control for unmanned aerial vehicle systems
    Wei, Qinglai
    Yang, Zesheng
    Su, Huaizhong
    Wang, Lijian
    NEUROCOMPUTING, 2022, 507 : 282 - 291
  • [10] Review of unmanned aerial vehicle intelligent networking technology and applications based on reinforcement learning
    Qiu, Xiulin
    Song, Bo
    Yin, Jun
    Xu, Lei
    Ke, Yaqi
    Liao, Zhenqiang
    Yang, Yuwang
    Harbin Gongcheng Daxue Xuebao/Journal of Harbin Engineering University, 2024, 45 (08): : 1576 - 1589