Intelligent Spectrum and Airspace Resource Management for Urban Air Mobility Using Deep Reinforcement Learning

Cited: 0
Authors
Apaza, Rafael D. [1 ,2 ]
Han, Ruixuan [2 ]
Li, Hongxiang [2 ]
Knoblock, Eric J. [1 ]
Affiliations
[1] NASA Glenn Res Ctr, Cleveland, OH 44135 USA
[2] Univ Louisville, Dept Elect & Comp Engn, Louisville, KY 40292 USA
Source
IEEE ACCESS | 2024 / Vol. 12
Funding
National Aeronautics and Space Administration (NASA);
Keywords
Aircraft; Air traffic control; Base stations; Uplink; Time-frequency analysis; Resource management; Radio spectrum management; Downlink; Signal to noise ratio; Artificial intelligence; Aeronautics; artificial intelligence; spectrum management; resource allocation; urban air mobility; wireless communications;
DOI
10.1109/ACCESS.2024.3492113
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
In an era of surging air travel and heightened reliance on efficient communication systems, there is a critical need to allocate frequency resources for aviation communications intelligently in order to manage airspace operations efficiently. This is essential to ensure safe, smooth, and technologically advanced flight services. Over time, frequency-management techniques and new radio technologies have evolved to cope with the increased demands placed on the system by growing airspace activity. The development of Urban Air Mobility (UAM) operations has introduced a fresh challenge, further burdening the already limited aviation spectrum, and there is a pressing need for a new approach to managing and utilizing frequencies efficiently. This paper explores the application of a Multi-Agent Reinforcement Learning (MARL) technique to minimize aircraft mission completion time and enhance safety while dealing with the limitations of airspace and frequency resources. The proposed MARL approach uses the Value Decomposition Network (VDN) technique to optimize frequency use, flight time, and departure wait times by managing spectrum allocation, vehicle departure, and flight speed. The problem of minimizing mission completion time is formulated as a Markov Decision Process (MDP) that accounts for factors such as frequency channel availability, signal-to-interference-plus-noise power ratio, aircraft location, and flight status. We develop a case study and assess the performance of the MARL technique through simulation in a hypothetical UAM scenario, evaluating the solution against Q-Mixing (QMIX), Orthogonal Multiple Access, and a heuristic greedy algorithm.
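The VDN technique named in the abstract factors a joint action-value function into a sum of per-agent utilities, so the greedy joint action decomposes into independent per-agent argmaxes and a single shared team reward trains all agents. A minimal tabular sketch of that idea (all class, variable, and size choices here are illustrative, not taken from the paper, which uses neural networks rather than tables):

```python
import numpy as np

class VDNAgents:
    """Minimal tabular Value Decomposition Network sketch.

    Each agent i keeps a utility table Q_i(obs, action); the joint value is
    Q_tot = sum_i Q_i, so maximizing Q_tot reduces to each agent maximizing
    its own Q_i independently.
    """

    def __init__(self, n_agents, n_obs, n_actions, lr=0.1, gamma=0.95):
        self.q = np.zeros((n_agents, n_obs, n_actions))
        self.lr, self.gamma = lr, gamma

    def greedy_joint_action(self, obs):
        # argmax of a sum of independent terms = per-agent argmaxes
        return [int(np.argmax(self.q[i, o])) for i, o in enumerate(obs)]

    def q_tot(self, obs, actions):
        # additive decomposition of the joint action value
        return sum(self.q[i, o, a] for i, (o, a) in enumerate(zip(obs, actions)))

    def update(self, obs, actions, team_reward, next_obs, done):
        # one shared TD target on the summed value; since dQ_tot/dQ_i = 1,
        # every agent receives the same TD error
        best_next = self.greedy_joint_action(next_obs)
        target = team_reward + (0.0 if done else self.gamma * self.q_tot(next_obs, best_next))
        td_error = target - self.q_tot(obs, actions)
        for i, (o, a) in enumerate(zip(obs, actions)):
            self.q[i, o, a] += self.lr * td_error
```

Because only the summed value is trained against the team reward, credit is implicitly shared across agents, which is what lets a common objective such as total mission completion time coordinate individually acting vehicles.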
Pages: 164750 - 164766
Page count: 17
Related Papers
50 records in total
  • [21] Prescribing optimal health-aware operation for urban air mobility with deep reinforcement learning
    Montazeri, Mina
    Kulkarni, Chetan S.
    Fink, Olga
    RELIABILITY ENGINEERING & SYSTEM SAFETY, 2025, 259
  • [22] Deep Reinforcement Learning Based Resource Allocation for Intelligent Reflecting Surface Assisted Dynamic Spectrum Sharing
    Guo, Jianxin
    Wang, Zhe
    Li, Jun
    Zhang, Jie
    2022 14TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING, WCSP, 2022, : 1178 - 1183
  • [23] DRJOA: intelligent resource management optimization through deep reinforcement learning approach in edge computing
    Chen, Yifan
    Chen, Shaomiao
    Li, Kuan-Ching
    Liang, Wei
    Li, Zhiyong
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2023, 26 (05): : 2897 - 2911
  • [25] An Adaptive Airspace Model for Quadcopters in Urban Air Mobility
    Shao, Quan
    Li, Ruoheng
    Dong, Min
    Song, Chengcheng
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (02) : 1702 - 1711
  • [26] Deep Reinforcement Learning for Resource Management in Network Slicing
    Li, Rongpeng
    Zhao, Zhifeng
    Sun, Qi
    I, Chih-Lin
    Yang, Chenyang
    Chen, Xianfu
    Zhao, Minjian
    Zhang, Honggang
    IEEE ACCESS, 2018, 6 : 74429 - 74441
  • [27] An intelligent resource management method in SDN based fog computing using reinforcement learning
    Anoushee, Milad
    Fartash, Mehdi
    Torkestani, Javad Akbari
    COMPUTING, 2024, 106 (04) : 1051 - 1080
  • [29] Access and Radio Resource Management for IAB Networks Using Deep Reinforcement Learning
    Sande, Malcolm M.
    Hlophe, Mduduzi C.
    Maharaj, Bodhaswar T.
    IEEE ACCESS, 2021, 9 : 114218 - 114234
  • [30] Quality of service based radar resource management using deep reinforcement learning
    Durst, Sebastian
    Brueggenwirth, Stefan
    2021 IEEE RADAR CONFERENCE (RADARCONF21): RADAR ON THE MOVE, 2021