Deep Reinforcement Learning Aided Cell Outage Compensation Framework in 5G Cloud Radio Access Networks

Citations: 0
Authors
Peng Yu
Xiao Yang
Fanqin Zhou
Hao Li
Lei Feng
Wenjing Li
Xuesong Qiu
Affiliations
[1] Beijing University of Posts and Telecommunications, State Key Laboratory of Networking and Switching Technology
Source
Mobile Networks and Applications | 2020 / Vol. 25
Keywords
5G C-RAN; Deep reinforcement learning; Cell outage compensation;
DOI
Not available
Abstract
As one of the key technologies of 5G, Cloud Radio Access Networks (C-RAN), with a cloud BBU (Base Band Unit) pool architecture and distributed RRHs (Remote Radio Heads), can provide ubiquitous services. When a failure occurs at an RRH, it cannot be alleviated in time and leads to a significant drop in network performance. The cell outage compensation (COC) problem for RRHs in 5G C-RAN is therefore very important. Although deep reinforcement learning (DRL) has been applied to many scenarios related to self-organizing networks (SON), it has seen few applications in cell outage compensation, and most intelligent algorithms struggle to obtain globally optimal solutions. In this paper, targeting the cell outage scenario in C-RAN with the goal of maximizing the energy efficiency and connectivity of RRHs while meeting the quality-of-service demands of each compensation user, a DRL-based framework is presented. First, compensation users are allocated to adjacent RRHs using the K-means clustering algorithm. Second, a DQN (Deep Q-Network) is used to find the antenna downtilt and the power allocated to compensation users. Compared with different genetic algorithms, simulation results show that the proposed framework converges quickly, remains stable, and reaches 95% of the maximum target value. This verifies the efficiency of the DRL-based framework and its effectiveness in meeting user requirements and handling cell outage compensation.
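The two-step framework in the abstract — K-means to assign the failed cell's users to surviving neighbour RRHs, then reinforcement learning to pick a downtilt/power configuration — can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: all positions, the reward model, and the action grid are hypothetical, and a simple tabular (stateless) Q-update stands in for the paper's DQN.

```python
# Step 1: cluster compensation users so each cluster maps to one neighbour RRH.
# Step 2: learn a (downtilt, power) action via an epsilon-greedy Q-update
#         against a toy energy-efficiency reward (hypothetical, peak at (6, 20)).
import math
import random

random.seed(0)

def kmeans(points, k, iters=20):
    """Plain K-means on 2-D points; returns (centroids, labels)."""
    centroids = random.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Recompute each centroid as the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if labels[i] == c]
            if members:
                centroids[c] = (sum(m[0] for m in members) / len(members),
                                sum(m[1] for m in members) / len(members))
    return centroids, labels

# Step 1: 30 compensation users of the failed RRH, split toward 3 neighbours.
users = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(30)]
_, assignment = kmeans(users, k=3)

# Step 2: discrete action grid of (downtilt in degrees, transmit power in dBm).
actions = [(tilt, pwr) for tilt in (2, 6, 10) for pwr in (10, 20, 30)]

def reward(tilt, pwr):
    # Hypothetical efficiency proxy with a single optimum at tilt=6, pwr=20.
    return -((tilt - 6) ** 2) - 0.1 * (pwr - 20) ** 2

q = [0.0] * len(actions)
for _ in range(500):
    # Epsilon-greedy action selection.
    if random.random() < 0.1:
        a = random.randrange(len(actions))
    else:
        a = max(range(len(actions)), key=lambda i: q[i])
    q[a] += 0.1 * (reward(*actions[a]) - q[a])  # stateless bandit-style update

best = actions[max(range(len(actions)), key=lambda i: q[i])]
print("cluster sizes:", [assignment.count(c) for c in range(3)])
print("learned (downtilt, power):", best)
```

In the paper, the second step is a DQN over a continuous/high-dimensional state; the bandit-style table above only illustrates the action-value idea on the same (downtilt, power) decision.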
Pages: 1644–1654 (10 pages)