Reinforcement Learning-Based Resource Allocation for Coverage Continuity in High Dynamic UAV Communication Networks

Cited by: 5
Authors
Li, Jiandong [1 ,2 ]
Zhou, Chengyi [1 ]
Liu, Junyu [1 ]
Sheng, Min [1 ]
Zhao, Nan [3 ]
Su, Yu [4 ]
Affiliations
[1] Xidian Univ, Inst Informat Sci, State Key Lab Integrated Serv Networks, Xian 710071, Shaanxi, Peoples R China
[2] Peng Cheng Lab, Dept Broadband Commun, Shenzhen 518000, Guangdong, Peoples R China
[3] Dalian Univ Technol, Sch Informat & Commun Engn, Dalian 116024, Peoples R China
[4] China Mobile Cheng Du Inst Res & Dev, Chengdu 610096, Peoples R China
Keywords
UAV networks; outdated CSI; deep reinforcement learning; resource allocation; POWER ALLOCATION; ACCESS; OPTIMIZATION;
DOI
10.1109/TWC.2023.3282909
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
Unmanned aerial vehicle-mounted aerial base stations (ABSs) can provide on-demand coverage in next-generation mobile communication systems. However, resource allocation for ABSs to provide continuous coverage is challenging, since the high mobility of ABSs and the time-varying air-to-ground channel cause the channel state information (CSI) to become outdated between the resource allocation decision and its implementation. As a consequence, the coverage of ABSs is discontinuous in the spatial-temporal dimensions, i.e., the variance of the user rate between adjacent time slots is large. To ensure coverage continuity, we design a resource allocation method based on deep reinforcement learning (RDRL). By adaptively tuning its neural network structure, RDRL satisfies coverage requirements by jointly allocating subchannels and power to ground users. Meanwhile, the temporal channel correlation is taken into account in the design of the RDRL reward function, which alleviates the influence of the CSI mismatch between decision and implementation. Moreover, RDRL can apply a model pre-trained on a previous coverage requirement to the current requirement to reduce computational complexity. Experimental results show that, compared with benchmark algorithms, RDRL reduces the rate variance by 66.7% and increases the spectral efficiency by 34.7%, thereby ensuring coverage continuity.
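Since the record only gives a high-level description, the Python sketch below is a minimal, hypothetical illustration of the two ideas named in the abstract: CSI that becomes outdated between decision and implementation, and a reward that couples spectral efficiency with the slot-to-slot rate variation. The first-order Gauss-Markov channel-aging model, the interference-free rate expression, the function names, and the variance weight are all assumptions for illustration, not the paper's RDRL design.

```python
# Hypothetical sketch, not the authors' RDRL implementation: a per-slot reward
# that rewards per-user spectral efficiency while penalizing the slot-to-slot
# rate variation, evaluated under outdated CSI modeled by first-order
# Gauss-Markov channel aging. All names and numbers below are illustrative.
import numpy as np


def age_channel(h_prev, rho, rng):
    """Evolve complex channel gains one slot forward:
    h_new = rho * h_prev + sqrt(1 - rho^2) * innovation,
    so the CSI used for the allocation decision (h_prev) is already outdated
    when the allocation is applied on h_new."""
    innovation = (rng.standard_normal(h_prev.shape)
                  + 1j * rng.standard_normal(h_prev.shape)) / np.sqrt(2.0)
    return rho * h_prev + np.sqrt(1.0 - rho**2) * innovation


def per_user_spectral_eff(h, power_w, subch_to_user, noise_w, num_users):
    """Sum spectral efficiency (bit/s/Hz) per user, ignoring interference.
    subch_to_user[k] is the user served on subchannel k; power_w[k] is the
    transmit power on that subchannel."""
    se = np.zeros(num_users)
    for k, u in enumerate(subch_to_user):
        snr = power_w[k] * np.abs(h[k]) ** 2 / noise_w
        se[u] += np.log2(1.0 + snr)
    return se


def continuity_reward(se_now, se_prev, var_weight=0.5):
    """Reward = mean spectral efficiency minus a penalty on the squared
    slot-to-slot rate variation (one way to encode coverage continuity)."""
    return se_now.mean() - var_weight * np.mean((se_now - se_prev) ** 2)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_subch, num_users = 8, 4
    # CSI observed when the allocation decision is made.
    h_decision = (rng.standard_normal(num_subch)
                  + 1j * rng.standard_normal(num_subch)) / np.sqrt(2.0)
    subch_to_user = rng.integers(0, num_users, size=num_subch)  # action: subchannel assignment
    power_w = np.full(num_subch, 0.125)                         # action: equal power split (W)
    # Channel experienced when the allocation is actually applied.
    h_applied = age_channel(h_decision, rho=0.9, rng=rng)
    se_prev = per_user_spectral_eff(h_decision, power_w, subch_to_user, 1e-3, num_users)
    se_now = per_user_spectral_eff(h_applied, power_w, subch_to_user, 1e-3, num_users)
    print("continuity-aware reward:", continuity_reward(se_now, se_prev))
```

In an actual DRL agent, the subchannel assignment and power split would be produced by the policy network rather than drawn at random, and the variance weight would trade mean rate against coverage continuity.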
Pages: 848-860
Number of pages: 13
Related Papers
50 records in total
  • [21] Joint Coverage and Resource Allocation for Federated Learning in UAV-Enabled Networks
    Yahya, Mariam
    Maghsudi, Setareh
    2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022, : 2476 - 2481
  • [22] Learning Based Dynamic Resource Allocation in UAV-assisted Mobile Crowdsensing Networks
    Liu, Wenshuai
    Zhou, Yuzhi
    Fu, Yaru
    2024 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC 2024, 2024,
  • [23] Resource Allocation in UAV-Assisted Wireless Networks Using Reinforcement Learning
    Luong, Phuong
    Gagnon, Francois
    Labeau, Fabrice
    2020 IEEE 92ND VEHICULAR TECHNOLOGY CONFERENCE (VTC2020-FALL), 2020,
  • [24] A Reinforcement Learning-Based Resource Allocation Scheme for Cloud Robotics
    Liu, Hang
    Liu, Shiwen
    Zheng, Kan
    IEEE ACCESS, 2018, 6 : 17215 - 17222
  • [25] Reinforcement Learning-Based Resource Allocation for Multiple Vehicles with Communication-Assisted Sensing Mechanism
    Fan, Yuxin
    Fei, Zesong
    Huang, Jingxuan
    Wang, Xinyi
    ELECTRONICS, 2024, 13 (13)
  • [26] Deep Reinforcement Learning-Based Resource Allocation for Integrated Sensing, Communication, and Computation in Vehicular Network
    Yang, Liu
    Wei, Yifei
    Feng, Zhiyong
    Zhang, Qixun
    Han, Zhu
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (12) : 18608 - 18622
  • [27] UAV spatiotemporal crowdsourcing resource allocation based on deep reinforcement learning (Chinese title: "UAV spatiotemporal crowdsourcing resource allocation for industrial scenarios")
    Huangfu, Wei (huangfuwei@ustb.edu.cn)
    2025, 47 (01): 91 - 100
  • [28] Dynamic Resource Allocation With Deep Reinforcement Learning in Multibeam Satellite Communication
    Deng, Danhao
    Wang, Chaowei
    Pang, Mingliang
    Wang, Weidong
    IEEE WIRELESS COMMUNICATIONS LETTERS, 2023, 12 (01) : 75 - 79
  • [29] Intelligence-based Reinforcement Learning for Continuous Dynamic Resource Allocation in Vehicular Networks
    Wang, Yuhang
    He, Ying
    Yu, F. Richard
    Wu, Kaishun
    2024 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC 2024, 2024,
  • [30] Reinforcement Learning Based Dynamic Resource Allocation for Massive MTC in Sliced Mobile Networks
    Yang, Bei
    Xu, Yiqian
    She, Xiaoming
    Zhu, Jianchi
    Wei, Fengsheng
    Chen, Peng
    Wang, Jianxiu
    2022 IEEE 14TH INTERNATIONAL CONFERENCE ON ADVANCED INFOCOMM TECHNOLOGY (ICAIT 2022), 2022, : 298 - 303