Energy consumption optimisation for unmanned aerial vehicle based on reinforcement learning framework

Cited by: 0
Authors
Wang Z. [1 ]
Xing Y. [1 ]
Affiliations
[1] Department of Aerospace, Cranfield University, College Rd, Wharley End, Bedford
Keywords
energy efficiency; machine learning; path planning; power consumption; Q-Learning; reinforcement learning; RL; trajectory optimisation
DOI
10.1504/IJPT.2024.138001
Abstract
The average battery life of drones in use today is around 30 minutes, which poses significant limitations for long-range operations such as seamless delivery and security monitoring. Meanwhile, the transportation sector is responsible for 93% of all carbon emissions, making it crucial to control energy usage during the operation of UAVs for future net-zero, massive-scale air traffic. In this study, a reinforcement learning (RL)-based model was implemented for the energy consumption optimisation of drones. The RL-based energy optimisation framework dynamically tunes vehicle control systems to maximise energy economy while considering mission objectives, ambient circumstances, and system performance. RL was used to create a dynamically optimised vehicle control system that selects the most energy-efficient route. Based on training times, a trained UAV in this study saved between 50.1% and 91.6% more energy than an untrained UAV on the same map. © 2024 Inderscience Publishers. All rights reserved.
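The Q-learning approach described in the abstract can be sketched as a minimal grid-world example: the UAV learns a route that avoids high-energy cells on a map. The grid size, energy costs, reward shaping, and hyperparameters below are illustrative assumptions, not the paper's actual environment or settings.

```python
import random

# Hypothetical 4x4 map: cost of entering each cell; two high-cost (e.g. headwind)
# cells are assumed for illustration. Start at (0, 0), goal at (3, 3).
GRID = 4
ENERGY = {(r, c): 1.0 for r in range(GRID) for c in range(GRID)}
ENERGY[(1, 1)] = 5.0
ENERGY[(2, 2)] = 5.0
GOAL = (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Move within grid bounds; reward penalises energy, bonus at the goal."""
    r = min(max(state[0] + action[0], 0), GRID - 1)
    c = min(max(state[1] + action[1], 0), GRID - 1)
    nxt = (r, c)
    reward = -ENERGY[nxt] + (10.0 if nxt == GOAL else 0.0)
    return nxt, reward, nxt == GOAL

def train(episodes=3000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    random.seed(seed)
    Q = {(s, a): 0.0 for s in ENERGY for a in range(len(ACTIONS))}
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[(s, i)])
            nxt, r, done = step(s, ACTIONS[a])
            best_next = max(Q[(nxt, b)] for b in range(len(ACTIONS)))
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = nxt
    return Q

def greedy_path_energy(Q, max_steps=20):
    """Total energy of the greedy (trained) route; inf if the goal is missed."""
    s, total = (0, 0), 0.0
    for _ in range(max_steps):
        a = max(range(len(ACTIONS)), key=lambda i: Q[(s, i)])
        s, r, done = step(s, ACTIONS[a])
        total += ENERGY[s]
        if done:
            return total
    return float("inf")
```

After training, the greedy policy takes a shortest route around the high-cost cells (total energy 6.0 on this map, versus at least 10.0 for any route through a high-cost cell), which mirrors the paper's trained-versus-untrained energy comparison at toy scale; the paper's actual state space additionally encodes mission objectives and ambient conditions.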
Pages: 75-94 (19 pages)
Related papers
50 items
  • [21] Autonomous control of unmanned aerial vehicle for chemical detection using deep reinforcement learning
    Byun, Hyung Joon
    Nam, Hyunwoo
    ELECTRONICS LETTERS, 2022, 58 (11) : 423 - 425
  • [22] The Effects of Rewards on Autonomous Unmanned Aerial Vehicle (UAV) Operations Using Reinforcement Learning
    Virani, Hemali
    Liu, Dahai
    Vincenzi, Dennis
    UNMANNED SYSTEMS, 2021, 9 (04) : 349 - 360
  • [23] Haze removal for unmanned aerial vehicle aerial video based on spatial-temporal coherence optimisation
    Zhao, Xintao
    Ding, Wenrui
    Liu, Chunhui
    Li, Hongguang
    IET IMAGE PROCESSING, 2018, 12 (01) : 88 - 97
  • [24] Development of a Framework for a Circulation Control-Based Unmanned Aerial Vehicle
    Saka, Pranith Chander
    Kanistras, Konstantinos
    Valavanis, Kimon P.
    Rutherford, Matthew J.
    2016 IEEE AEROSPACE CONFERENCE, 2016,
  • [25] Heterogeneous mission planning for a single unmanned aerial vehicle (UAV) with attention-based deep reinforcement learning
    Jung, Minjae
    Oh, Hyondong
    PEERJ COMPUTER SCIENCE, 2022, 8
  • [26] Vision-Based Autonomous Landing of a Multi-Copter Unmanned Aerial Vehicle using Reinforcement Learning
    Lee, Seongheon
    Shim, Taemin
    Kim, Sungjoong
    Park, Junwoo
    Hong, Kyungwoo
    Bang, Hyochoong
    2018 INTERNATIONAL CONFERENCE ON UNMANNED AIRCRAFT SYSTEMS (ICUAS), 2018, : 108 - 114
  • [27] A Fault-Tolerant Multi-Agent Reinforcement Learning Framework for Unmanned Aerial Vehicles-Unmanned Ground Vehicle Coverage Path Planning
    Ramezani, Mahya
    Atashgah, M. A. Amiri
    Rezaee, Alireza
    DRONES, 2024, 8 (10)
  • [28] Fixed-time convergence attitude control for a tilt trirotor unmanned aerial vehicle based on reinforcement learning
    Xie, Tian
    Xian, Bin
    Gu, Xu
    ISA TRANSACTIONS, 2023, 132 : 477 - 489
  • [29] Unmanned-Aerial-Vehicle-Assisted Computation Offloading for Mobile Edge Computing Based on Deep Reinforcement Learning
    Wang, Hui
    Ke, Hongchang
    Sun, Weijia
    IEEE ACCESS, 2020, 8 : 180784 - 180798