Reinforcement learning-based hybrid spectrum resource allocation scheme for the high load of URLLC services

Citations: 0
Authors
Qian Huang
Xianzhong Xie
Mohamed Cheriet
Affiliations
[1] Chongqing University of Posts and Telecommunications, School of Computer Science and Technology
[2] Université du Québec, École de technologie supérieure
Keywords
Ultra-reliable and low-latency communication; Radio resource allocation; mmWave; Hybrid spectrum; Reinforcement learning; Multipath deep neural network
DOI: Not available
Abstract
Ultra-reliable and low-latency communication (URLLC) in mobile networks remains one of the core problems that require thorough research in 5G and beyond. With the vigorous development of various emerging URLLC technologies, resource shortages will soon occur even in mmWave cells with rich spectrum resources. Moreover, because the radio resource space of mmWave is large, traditional real-time resource scheduling decisions can cause serious delays. Consequently, we investigate a delay minimization problem under spectrum and power constraints in the mmWave hybrid access network. To reduce the delay caused by high load and radio resource shortage, a hybrid spectrum and power resource allocation scheme based on reinforcement learning (RL) is proposed. We compress the state space and the action space by temporarily dumping and decomposing actions. A multipath deep neural network and the policy gradient method are used as the approximator and the update method of the parameterized policy, respectively. The experimental results show that the RL-based hybrid spectrum and power resource allocation scheme converges after a limited number of learning iterations. Compared with other schemes, the RL-based scheme effectively guarantees the URLLC delay constraint as long as the load does not exceed 130%.
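The learning loop sketched in the abstract can be illustrated with a short example. The Python/PyTorch fragment below is a hypothetical simplification, not the authors' implementation: the state dimension, the numbers of candidate spectrum bands and power levels, and the reward signal (here assumed to be the negative of the observed packet delay) are placeholder assumptions, and the multipath network is reduced to a shared encoder with two output branches, one per decomposed sub-action, trained with a plain REINFORCE policy-gradient update.

import torch
import torch.nn as nn
from torch.distributions import Categorical

# Assumed sizes; the paper's actual state and action spaces are larger.
STATE_DIM, N_BANDS, N_POWER_LEVELS = 16, 4, 8

class MultipathPolicy(nn.Module):
    """Shared encoder with two branches, one per decomposed sub-action."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU())
        self.band_head = nn.Linear(64, N_BANDS)          # spectrum sub-action
        self.power_head = nn.Linear(64, N_POWER_LEVELS)  # power sub-action

    def forward(self, state):
        h = self.encoder(state)
        return (Categorical(logits=self.band_head(h)),
                Categorical(logits=self.power_head(h)))

policy = MultipathPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_update(trajectory, gamma=0.99):
    """One policy-gradient step over a list of (state, band, power, reward)
    tuples; the reward could be, e.g., the negative queueing delay."""
    # Discounted return-to-go for each step of the trajectory.
    returns, g = [], 0.0
    for *_, r in reversed(trajectory):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    loss = torch.zeros(())
    for (state, band, power, _), g in zip(trajectory, returns):
        band_dist, power_dist = policy(torch.as_tensor(state, dtype=torch.float32))
        # Log-probability of the decomposed action is the sum over sub-actions.
        logp = (band_dist.log_prob(torch.tensor(band))
                + power_dist.log_prob(torch.tensor(power)))
        loss = loss - logp * g
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Decomposing the joint (band, power) action into two branch outputs keeps each output layer small (N_BANDS + N_POWER_LEVELS units instead of N_BANDS × N_POWER_LEVELS), which reflects the spirit of the action-space compression the abstract refers to.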
Related papers (50 in total)
  • [1] Reinforcement learning-based hybrid spectrum resource allocation scheme for the high load of URLLC services
    Huang, Qian
    Xie, Xianzhong
    Cheriet, Mohamed
    EURASIP JOURNAL ON WIRELESS COMMUNICATIONS AND NETWORKING, 2020, 2020 (01)
  • [2] A Reinforcement Learning-Based Resource Allocation Scheme for Cloud Robotics
    Liu, Hang
    Liu, Shiwen
    Zheng, Kan
    IEEE ACCESS, 2018, 6 : 17215 - 17222
  • [3] Reinforcement Learning-based Joint Power and Resource Allocation for URLLC in 5G
    Elsayed, Medhat
    Erol-Kantarci, Melike
    2019 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2019,
  • [4] A reinforcement learning-based computing offloading and resource allocation scheme in F-RAN
    Jiang, Fan
    Ma, Rongxin
    Gao, Youjun
    Gu, Zesheng
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING, 2021, 2021 (01)
  • [5] Deep Reinforcement Learning-Based Spectrum Allocation Algorithm in Internet of Vehicles Discriminating Services
    Guan, Zheng
    Wang, Yuyang
    He, Min
    APPLIED SCIENCES-BASEL, 2022, 12 (03):
  • [6] Learning-Based Cooperative Multiplexing Mode Selection and Resource Allocation for eMBB and uRLLC
    Chi, Xiaoyu
    Xu, Xiaodong
    Han, Shujun
    Zhang, Jingxuan
    2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022, : 836 - 841
  • [7] A Deep Reinforcement Learning Scheme for Spectrum Sensing and Resource Allocation in ITS
    Wei, Huang
    Peng, Yuyang
    Yue, Ming
    Long, Jiale
    AL-Hazemi, Fawaz
    Mirza, Mohammad Meraj
    MATHEMATICS, 2023, 11 (16)
  • [8] A Reinforcement Learning-Based Green Resource Allocation for Heterogeneous Services in Cooperative Cognitive Radio Networks
    Kaur, Amandeep
    Kumar, Krishan
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2022, 19 (02): : 1554 - 1566
  • [9] Deep Reinforcement Learning Based Resource Allocation for URLLC User-Centric Network
    Hu, Fajin
    Zhao, Junhui
    Liao, Jieyu
    Zhang, Huan
    2022 14TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING, WCSP, 2022, : 522 - 526