Intelligent Spectrum and Airspace Resource Management for Urban Air Mobility Using Deep Reinforcement Learning

Cited: 0
|
Authors
Apaza, Rafael D. [1 ,2 ]
Han, Ruixuan [2 ]
Li, Hongxiang [2 ]
Knoblock, Eric J. [1 ]
Affiliations
[1] NASA Glenn Res Ctr, Cleveland, OH 44135 USA
[2] Univ Louisville, Dept Elect & Comp Engn, Louisville, KY 40292 USA
Source
IEEE ACCESS | 2024 / Vol. 12
Funding
National Aeronautics and Space Administration (NASA);
Keywords
Aircraft; Air traffic control; Base stations; Uplink; Time-frequency analysis; Resource management; Radio spectrum management; Downlink; Signal to noise ratio; Artificial intelligence; Aeronautics; artificial intelligence; spectrum management; resource allocation; urban air mobility; wireless communications;
DOI
10.1109/ACCESS.2024.3492113
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In an era dominated by a surge in air travel and heightened reliance on efficient communication systems, there is a critical need to intelligently allocate frequency resources for aviation communications and to efficiently manage airspace operations. This is essential to ensure safe, smooth, and technologically advanced flight services. Over time, frequency-management techniques and new radio technologies have evolved to cope with the increased demands that growing airspace activity places on the system. With the development of Urban Air Mobility (UAM) operations, a fresh challenge has emerged, further burdening the already limited aviation spectrum, and a new approach to efficiently managing and utilizing frequencies is needed. This paper explores the application of a Multi-Agent Reinforcement Learning (MARL) technique to minimize aircraft mission completion time and enhance safety while dealing with the limitations of airspace and frequency resources. The proposed MARL approach uses the Value Decomposition Network (VDN) technique to optimize frequency use, flight time, and departure wait times by jointly managing spectrum allocation, vehicle departure, and flight speed. To minimize mission completion time, the problem is formulated as a Markov Decision Process (MDP) that accounts for frequency-channel availability, signal-to-interference-plus-noise ratio, aircraft location, and flight status. We develop a case study and assess the performance of the MARL technique through simulation in a hypothetical UAM scenario. The solution is evaluated against Q-Mixing (QMIX), Orthogonal Multiple Access, and a Heuristic Greedy Algorithm.
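The additive value decomposition at the heart of VDN can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the agents, actions, and Q-values below are hypothetical stand-ins for the aircraft, their spectrum/speed/departure decisions, and learned per-agent action values.

```python
from itertools import product

# Hypothetical example: 2 UAM aircraft (agents), 3 actions each
# (e.g., hold at vertiport, fly slow on channel 1, fly fast on
# channel 2). The Q-values are made-up numbers for illustration.
agent_q = [
    [1.0, 2.0, 3.0],   # agent 0: Q_0(o_0, a)
    [5.0, 4.0, 0.0],   # agent 1: Q_1(o_1, a)
]

# VDN's core assumption: the joint action-value factorizes additively,
# Q_tot(a_1, ..., a_n) = sum_i Q_i(o_i, a_i).
def q_tot(actions):
    return sum(q[a] for q, a in zip(agent_q, actions))

# Decentralized execution: each agent greedily maximizes its own Q_i
# using only its local observation.
greedy = tuple(max(range(len(q)), key=q.__getitem__) for q in agent_q)

# Because Q_tot is monotone in each Q_i, the per-agent argmax is also
# the joint argmax -- the consistency property VDN relies on.
best_joint = max(product(range(3), repeat=2), key=q_tot)
assert greedy == best_joint
print(greedy, q_tot(greedy))  # (2, 0) with Q_tot = 8.0
```

During training, only the summed Q_tot is fitted to the team reward, while execution stays fully decentralized; this is what distinguishes VDN from the QMIX baseline, which mixes the per-agent values through a learned monotone network instead of a plain sum.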
Pages: 164750-164766
Page count: 17
Related Papers
50 records in total
  • [41] Designing airspace for urban air mobility: A review of concepts and approaches
    Bauranov, Aleksandar
    Rakas, Jasenka
    PROGRESS IN AEROSPACE SCIENCES, 2021, 125
  • [42] Resource Management at the Network Edge: A Deep Reinforcement Learning Approach
    Zeng, Deze
    Gu, Lin
    Pan, Shengli
    Cai, Jingjing
    Guo, Song
    IEEE NETWORK, 2019, 33 (03): 26-33
  • [43] Deep Reinforcement Learning for Resource Management on Network Slicing: A Survey
    Hurtado Sanchez, Johanna Andrea
    Casilimas, Katherine
    Caicedo Rendon, Oscar Mauricio
    SENSORS, 2022, 22 (08)
  • [44] A Hierarchical Framework of Cloud Resource Allocation and Power Management Using Deep Reinforcement Learning
    Liu, Ning
    Li, Zhe
    Xu, Jielong
    Xu, Zhiyuan
    Lin, Sheng
    Qiu, Qinru
    Tang, Jian
    Wang, Yanzhi
    2017 IEEE 37TH INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2017), 2017: 372-382
  • [45] Implementation of Trusted Traceability Query Using Blockchain and Deep Reinforcement Learning in Resource Management
    Jiang, Yunting
    Lei, Yalin
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2022, 2022
  • [46] Learning Urban Driving Policies using Deep Reinforcement Learning
    Agarwal, Tanmay
    Arora, Hitesh
    Schneider, Jeff
    2021 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2021: 607-614
  • [47] Deep Reinforcement Learning Based Intelligent Resource Allocation in Hybrid Vehicle Scenario
    Lou, Chengkai
    Hou, Fen
    Li, Bo
    Ding, Hongwei
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2025, 74 (03): 4656-4668
  • [48] Resource Management and Reflection Optimization for Intelligent Reflecting Surface Assisted Multi-Access Edge Computing Using Deep Reinforcement Learning
    Wang, Zhaoying
    Wei, Yifei
    Feng, Zhiyong
    Yu, F. Richard
    Han, Zhu
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2023, 22 (02): 1175-1186
  • [49] Tactical conflict resolution in urban airspace for unmanned aerial vehicles operations using attention-based deep reinforcement learning
    Zhang, Mingcheng
    Yan, Chao
    Dai, Wei
    Xiang, Xiaojia
    Low, Kin Huat
    GREEN ENERGY AND INTELLIGENT TRANSPORTATION, 2023, 2 (04)
  • [50] Smart Scheduling of EVs Through Intelligent Home Energy Management Using Deep Reinforcement Learning
    Suleman, Ahmad
    Amin, M. Asim
    Fatima, Mahnoor
    Asad, Bilal
    Menghwar, Mohan
    Hashmi, Muhammad Adnan
    2022 17TH INTERNATIONAL CONFERENCE ON EMERGING TECHNOLOGIES (ICET'22), 2022: 18-24