Reinforcement Learning for Traffic-Adaptive Sleep Mode Management in 5G Networks

Cited by: 11
Authors
Masoudi, Meysam [1 ]
Khafagy, Mohammad Galal [1 ,2 ]
Soroush, Ebrahim [4 ]
Giacomelli, Daniele [3 ]
Morosi, Simone [3 ]
Cavdar, Cicek [1 ]
Affiliations
[1] KTH Royal Inst Technol, Stockholm, Sweden
[2] Amer Univ Cairo AUC, Cairo, Egypt
[3] Univ Florence, Florence, Italy
[4] Zi Tel Co, Tehran, Iran
Keywords
5G; base station sleeping; discontinuous transmission; energy efficiency; reinforcement learning;
DOI
10.1109/pimrc48278.2020.9217286
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Discipline code: 0812
Abstract
In mobile networks, base stations (BSs) account for the largest share of energy consumption. To reduce BS energy consumption, BS components with similar (de)activation times can be grouped and put to sleep during their periods of inactivity. The deeper and more energy-saving a sleep mode (SM) is, the longer its (de)activation time, which incurs a proportionally longer service interruption. It is therefore challenging to decide on the best SM in a timely manner, bearing in mind the daily traffic fluctuation and the imposed service-level constraints on delay/dropping. In this study, we leverage an online reinforcement learning technique, SARSA, and propose an algorithm that decides which SM to choose given the time and BS load. We use real mobile traffic traces obtained from a BS in Stockholm to evaluate the performance of the proposed algorithm. Simulation results show that considerable energy savings can be achieved at the cost of an acceptable delay, i.e., the wake-up time until users are served, compared with two lower/upper baselines, namely fixed (non-adaptive) SMs and the optimal non-causal solution.
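The decision loop the abstract describes can be sketched as tabular SARSA over a discretized state of (hour of day, load). The following is a minimal illustrative sketch, not the authors' implementation: the state discretization, the reward shape (energy saving minus a load-weighted wake-up penalty), and every numeric value below are assumed toy choices.

```python
import random
from collections import defaultdict

# Illustrative tabular SARSA sketch for sleep-mode (SM) selection.
# All constants below are assumed toy values, not from the paper.

ACTIONS = [0, 1, 2, 3]                # 0 = active, 3 = deepest sleep mode
ENERGY_SAVING = [0.0, 0.3, 0.6, 0.9]  # assumed relative energy saving per SM
WAKEUP_COST = [0.0, 0.1, 0.4, 1.0]    # assumed delay penalty (deeper = slower wake-up)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)  # Q[(state, action)], state = (hour, load)

def reward(state, action):
    _, load = state
    # Deeper sleep saves more energy, but its wake-up delay hurts more
    # when the load (chance of an arriving user) is high.
    return ENERGY_SAVING[action] - WAKEUP_COST[action] * load

def choose(state):
    # epsilon-greedy behaviour policy
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def sarsa_step(state, action, next_state):
    # On-policy SARSA update: bootstrap on the action actually taken next.
    next_action = choose(next_state)
    target = reward(state, action) + GAMMA * Q[(next_state, next_action)]
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
    return next_action

# Toy diurnal traffic trace: load peaks at midnight, vanishes at noon (assumed).
random.seed(0)
trace = [(h, round(abs(h - 12) / 12, 1)) for h in range(24)] * 200
state, action = trace[0], choose(trace[0])
for next_state in trace[1:]:
    action = sarsa_step(state, action, next_state)
    state = next_state
```

Because the update is on-policy, the learned policy accounts for the exploration it performs; under this toy reward, the agent learns to prefer deeper sleep in low-load hours, where the wake-up penalty is cheap relative to the energy saved.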
Pages: 6