Dynamic Multi-Sleeping Control with Diverse Quality-of-Service Requirements in Sixth-Generation Networks Using Federated Learning

Cited by: 1
Authors
Pan, Tianzhu [1 ]
Wu, Xuanli [1 ]
Li, Xuesong [1 ]
Affiliations
[1] Harbin Inst Technol, Sch Elect & Informat Engn, Harbin 150001, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
6G; network energy saving; base station sleeping; federated learning;
DOI
10.3390/electronics13030549
Chinese Library Classification
TP [Automation technology, computer technology];
Discipline code
0812 ;
Abstract
The intensive deployment of sixth-generation (6G) base stations is expected to greatly enhance network service capabilities, offering significantly higher throughput and lower latency compared to previous generations. However, this advancement is accompanied by a notable increase in the number of network elements, leading to increased power consumption. This not only worsens carbon emissions but also significantly raises operational costs for network operators. To address the challenges arising from this surge in network energy consumption, there is a growing focus on innovative energy-saving technologies designed for 6G networks. These technologies involve strategies for dynamically adjusting the operational status of base stations, such as activating sleep modes during periods of low demand, to optimize energy use while maintaining network performance and efficiency. Furthermore, integrating artificial intelligence into the network's operational framework is being explored to establish a more energy-efficient, sustainable, and cost-effective 6G network. In this paper, we propose a small base station sleeping control scheme in heterogeneous dense small cell networks based on federated reinforcement learning, which enables the small base stations to dynamically enter appropriate sleep modes, to reduce power consumption while ensuring users' quality-of-service (QoS) requirements. In our scheme, double deep Q-learning is used to solve the complex non-convex base station sleeping control problem. To tackle the dynamic changes in QoS requirements caused by user mobility, small base stations share local models with the macro base station, which acts as the central control unit, via the X2 interface. The macro base station aggregates local models into a global model and then distributes the global model to each base station for the next round of training. 
By alternately performing model training, aggregation, and updating, each base station in the network can dynamically adapt to changes in QoS requirements brought about by user mobility. Simulations show that compared with methods based on distributed deep Q-learning, our proposed scheme effectively reduces the performance fluctuations caused by user handover and achieves lower network energy consumption while guaranteeing users' QoS requirements.
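The alternating train-aggregate-update cycle described above can be sketched as a FedAvg-style loop: each small base station trains a local double-DQN, the macro base station averages the local weights received over the X2 interface, and the resulting global model is redistributed for the next training round. This is a minimal illustrative sketch under those assumptions; the function names and the one-layer "model" are hypothetical, not the authors' implementation:

```python
import numpy as np

def aggregate_models(local_models):
    """Macro base station averages the small base stations'
    local network weights layer by layer (FedAvg-style)."""
    n = len(local_models)
    return {layer: sum(m[layer] for m in local_models) / n
            for layer in local_models[0]}

def federated_round(local_models):
    """One round: aggregate local models into a global model,
    then distribute a copy back to every small base station."""
    global_model = aggregate_models(local_models)
    return [dict(global_model) for _ in local_models]

# Toy example: two small base stations, each with a one-layer "Q-network".
sbs_models = [
    {"w": np.array([1.0, 3.0])},
    {"w": np.array([3.0, 5.0])},
]
updated = federated_round(sbs_models)
print(updated[0]["w"])  # → [2. 4.]
```

In the paper's setting each local model would be the weights of a double deep Q-network trained on that station's own sleep-control experience; averaging lets every station absorb QoS patterns observed elsewhere as users move between cells.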
Pages: 19
Related Papers
14 in total
  • [11] Reinforcement Learning based Multi-Attribute Slice Admission Control for Next-Generation Networks in a Dynamic Pricing Environment
    Ferreira, Victor C.
    Esmat, H. H.
    Lorenzo, Beatriz
    Kundu, Sandip
    Franca, Felipe M. G.
    2022 IEEE 95TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2022-SPRING), 2022,
  • [12] Automatic generation control of multi-area power systems with diverse energy sources using Teaching Learning Based Optimization algorithm
    Sahu, Rabindra Kumar
    Gorripotu, Tulasichandra Sekhar
    Panda, Sidhartha
    ENGINEERING SCIENCE AND TECHNOLOGY-AN INTERNATIONAL JOURNAL-JESTECH, 2016, 19 (01): : 113 - 134
  • [13] Dynamic network slicing orchestration in open 5G networks using multi-criteria decision making and secure federated learning techniques
    Kholidy, Hisham A.
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2025, 28 (04):
  • [14] Dynamic traffic signal control using mean field multi-agent reinforcement learning in large scale road-networks
    Hu, Tianfeng
    Hu, Zhiqun
    Lu, Zhaoming
    Wen, Xiangming
    IET INTELLIGENT TRANSPORT SYSTEMS, 2023, 17 (09) : 1715 - 1728