Traffic and Obstacle-Aware UAV Positioning in Urban Environments Using Reinforcement Learning

Cited by: 0
Authors
Shafafi, Kamran [1 ]
Ricardo, Manuel [1 ]
Campos, Rui [1 ]
Affiliations
[1] Univ Porto, Fac Engn, INESC TEC, P-4200465 Porto, Portugal
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Unmanned aerial vehicles; UAV positioning; aerial networks; LoS communications technology; reinforcement learning; high-capacity communications; positioning algorithms; NETWORKS;
DOI
10.1109/ACCESS.2024.3515654
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Unmanned Aerial Vehicles (UAVs) are well suited as cost-effective and adaptable platforms for carrying Wi-Fi Access Points (APs) and cellular Base Stations (BSs). Deploying aerial networks in disaster management scenarios and crowded areas can effectively enhance Quality of Service (QoS). Maintaining Line-of-Sight (LoS), especially at higher frequencies, is crucial for ensuring reliable, high-capacity communications, particularly in environments with obstacles. The main contribution of this paper is a traffic- and obstacle-aware UAV positioning algorithm named Reinforcement Learning-based Traffic and Obstacle-aware Positioning Algorithm (RLTOPA), designed for such environments. RLTOPA determines the optimal position of the UAV by considering the positions of ground users, the coordinates of obstacles, and the traffic demands of the users. This positioning aims to maximize QoS in terms of throughput by ensuring optimal LoS between the ground users and the UAV. The network performance of the proposed solution, characterized in terms of mean delay and throughput, was evaluated using the ns-3 simulator. The results show up to 95% improvement in aggregate throughput and 71% in delay without compromising fairness.
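The abstract describes RLTOPA only at a high level; the exact state, action, and reward formulation is given in the paper itself. As a rough illustration of how a traffic- and obstacle-aware positioning objective can be cast as tabular reinforcement learning, the Python sketch below places a UAV on a grid of candidate positions and rewards cells that serve the largest LoS-weighted traffic demand. The grid size, epsilon-greedy Q-learning, 2-D obstacle discs, and all parameter values are illustrative assumptions, not the published algorithm.

```python
import numpy as np

# Illustrative sketch only: RLTOPA's actual state/action/reward design is not
# reproduced here; every quantity below is an assumption made for intuition.
rng = np.random.default_rng(0)

GRID = 10                                       # candidate UAV positions on a GRID x GRID plane
users = rng.uniform(0, GRID, size=(8, 2))       # ground-user (x, y) positions
demand = rng.uniform(1.0, 10.0, size=8)         # per-user traffic demand (e.g., Mbit/s)
obstacles = rng.uniform(0, GRID, size=(5, 2))   # obstacle centres (x, y)
OBS_RADIUS = 1.0                                # assumed obstacle footprint

def has_los(uav_xy, user_xy):
    """Approximate LoS test: the 2-D segment UAV->user must not cross an obstacle disc."""
    seg = user_xy - uav_xy
    seg_len = np.linalg.norm(seg) + 1e-9
    for obs in obstacles:
        t = np.clip(np.dot(obs - uav_xy, seg) / seg_len**2, 0.0, 1.0)
        if np.linalg.norm(uav_xy + t * seg - obs) < OBS_RADIUS:
            return False
    return True

def reward(cell):
    """Traffic-weighted LoS reward for placing the UAV at a grid cell."""
    uav_xy = np.array(cell, dtype=float) + 0.5
    return sum(d for u, d in zip(users, demand) if has_los(uav_xy, u))

# Tabular Q-learning over the candidate-position grid
# (one state per cell; actions = stay / move north / south / east / west).
ACTIONS = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]
Q = np.zeros((GRID, GRID, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.9, 0.1

state = (GRID // 2, GRID // 2)
for _ in range(5000):
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[state]))
    nxt = (int(np.clip(state[0] + ACTIONS[a][0], 0, GRID - 1)),
           int(np.clip(state[1] + ACTIONS[a][1], 0, GRID - 1)))
    r = reward(nxt)
    Q[state][a] += alpha * (r + gamma * np.max(Q[nxt]) - Q[state][a])
    state = nxt

best = np.unravel_index(np.argmax(Q.max(axis=2)), (GRID, GRID))
print("Learned UAV cell:", best, "| LoS-weighted demand served:", round(reward(best), 1))
```

Under these assumptions, the learned cell is simply the one from which the most traffic demand is reachable with unobstructed LoS; the paper additionally evaluates the resulting throughput and delay in ns-3 rather than with this geometric proxy.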
Pages: 188652-188663
Page count: 12
Related Papers
50 records
  • [31] Multiagent Reinforcement Learning for Urban Traffic Control Using Coordination Graphs
    Kuyer, Lior
    Whiteson, Shimon
    Bakker, Bram
    Vlassis, Nikos
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, PART I, PROCEEDINGS, 2008, 5211 : 656 - +
  • [32] UAV Detection Using Reinforcement Learning
    Alkhonaini, Arwa
    Sheltami, Tarek
    Mahmoud, Ashraf
    Imam, Muhammad
    SENSORS, 2024, 24 (06)
  • [33] UAV Pursuit using Reinforcement Learning
    Bonnet, Alexandre
    Akhloufi, Moulay A.
    UNMANNED SYSTEMS TECHNOLOGY XXI, 2019, 11021
  • [34] Robust Motion Control for UAV in Dynamic Uncertain Environments Using Deep Reinforcement Learning
    Wan, Kaifang
    Gao, Xiaoguang
    Hu, Zijian
    Wu, Gaofeng
    REMOTE SENSING, 2020, 12 (04)
  • [35] UAV Navigation in 3D Urban Environments with Curriculum-based Deep Reinforcement Learning
    de Carvalho, Kevin Braathen
    de Oliveira, Iure Rosa L.
    Brandao, Alexandre S.
    2023 INTERNATIONAL CONFERENCE ON UNMANNED AIRCRAFT SYSTEMS, ICUAS, 2023, : 1249 - 1255
  • [36] Learning obstacle avoidance and predation in complex reef environments with deep reinforcement learning
    Hou, Ji
    He, Changling
    Li, Tao
    Zhang, Chunze
    Zhou, Qin
    BIOINSPIRATION & BIOMIMETICS, 2024, 19 (05)
  • [37] A Deep Reinforcement Learning Framework for UAV Navigation in Indoor Environments
    Walker, Ory
    Vanegas, Fernando
    Gonzalez, Felipe
    Koenig, Sven
    2019 IEEE AEROSPACE CONFERENCE, 2019,
  • [38] Optimization of Obstacle Avoidance Using Reinforcement Learning
    Kominami, Keishi
    Takubo, Tomohito
    Ohara, Kenichi
    Mae, Yasushi
    Arai, Tatsuo
    2012 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII), 2012, : 67 - 72
  • [39] Virtual Tube Visual Obstacle Avoidance for UAV Based on Deep Reinforcement Learning
    Zhao, Jing
    Pei, Zi-Nan
    Jiang, Bin
    Lu, Ning-Yun
    Zhao, Fei
    Chen, Shu-Feng
    Zidonghua Xuebao/Acta Automatica Sinica, 2024, 50 (11): : 2245 - 2258
  • [40] Autonomous Obstacle Avoidance and Target Tracking of UAV Based on Deep Reinforcement Learning
    Guoqiang Xu
    Weilai Jiang
    Zhaolei Wang
    Yaonan Wang
    Journal of Intelligent & Robotic Systems, 2022, 104