Mixture of Experts for Intelligent Networks: A Large Language Model-enabled Approach

Times Cited: 0
Authors
Du, Hongyang [1 ]
Liu, Guangyuan [1 ]
Lin, Yijing [2 ]
Niyato, Dusit [1 ]
Kang, Jiawen [3 ,4 ,5 ]
Xiong, Zehui [6 ]
Kim, Dong In [7 ]
Affiliations
[1] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[2] Beijing Univ Posts & Telecommun, State Key Lab Networking & Switching Technol, Beijing 100876, Peoples R China
[3] Guangdong Univ Technol, Sch Automat, Guangzhou 510006, Peoples R China
[4] Minist Educ, Key Lab Intelligent Informat Proc & Syst Integrat, Guangzhou 510006, Peoples R China
[5] Guangdong HongKong Macao Joint Lab Smart Discrete, Guangzhou 510006, Peoples R China
[6] Singapore Univ Technol & Design, Pillar Informat Syst Technol & Design, Singapore 487372, Singapore
[7] Sungkyunkwan Univ, Dept Elect & Comp Engn, Suwon 16419, South Korea
Source
20TH INTERNATIONAL WIRELESS COMMUNICATIONS & MOBILE COMPUTING CONFERENCE, IWCMC 2024 | 2024
Funding
National Research Foundation of Singapore; National Natural Science Foundation of China;
Keywords
Generative AI (GAI); large language model; mixture of experts; network optimization;
DOI
10.1109/IWCMC61514.2024.10592370
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Optimizing the diverse tasks of wireless users poses a significant challenge for networking systems because of the expanding range of user requirements. Despite advancements in Deep Reinforcement Learning (DRL), the need for customized optimization for individual users complicates developing and deploying numerous DRL models, which consumes substantial computational resources and energy and can produce inconsistent outcomes. To address this issue, we propose a novel approach that utilizes a Mixture of Experts (MoE) framework, augmented with Large Language Models (LLMs), to analyze user objectives and constraints, select specialized DRL experts, and weigh each decision from the participating experts. Specifically, we develop a gate network to oversee the expert models, allowing a collective of experts to tackle a wide array of new tasks. Furthermore, we substitute the traditional gate network with an LLM, leveraging its advanced reasoning capabilities to manage expert model selection for joint decisions. The proposed method reduces the need to train a new DRL model for each unique optimization problem, thereby lowering energy consumption and AI model implementation costs. The LLM-enabled MoE approach is validated on a general maze navigation task and a specific network service provider utility maximization task, demonstrating its effectiveness and practical applicability in optimizing complex networking systems.
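The abstract describes an architecture in which pretrained DRL experts are combined by an LLM that plays the role of the gate network. The Python sketch below is one illustrative reading of that design, not code from the paper: the names DRLExpert, llm_gate, and moe_decision are hypothetical, and the LLM call is mocked because the record does not specify a particular model or API.

# Minimal sketch of the LLM-as-gate Mixture-of-Experts idea, assuming pretrained
# DRL experts and an LLM that returns per-expert weights for a described task.
# All names are illustrative; the LLM reply is mocked to keep the sketch runnable.
import numpy as np


class DRLExpert:
    """Stand-in for a pretrained DRL policy specialized for one task family."""

    def __init__(self, name, rng_seed):
        self.name = name
        self._rng = np.random.default_rng(rng_seed)

    def act(self, state):
        # A real expert would evaluate its trained policy network here; a random
        # action vector keeps the sketch self-contained.
        return self._rng.normal(size=len(state))


def llm_gate(task_description, expert_names):
    """Hypothetical gate: ask an LLM which experts suit the task and how much
    weight each expert's decision should receive."""
    # prompt = (f"Task: {task_description}\nExperts: {expert_names}\n"
    #           "Return one weight in [0, 1] per expert.")
    # weights = parse(call_llm(prompt))   # any chat-completion API could be used
    weights = np.ones(len(expert_names))  # mocked uniform reply
    return weights / weights.sum()        # normalize to a distribution


def moe_decision(state, task_description, experts):
    """Combine the experts' actions, weighted by the LLM gate's output."""
    weights = llm_gate(task_description, [e.name for e in experts])
    actions = np.stack([e.act(state) for e in experts])
    return weights @ actions              # weighted joint decision


if __name__ == "__main__":
    experts = [DRLExpert("latency_expert", 0),
               DRLExpert("energy_expert", 1),
               DRLExpert("throughput_expert", 2)]
    state = np.array([0.4, 0.1, 0.7])
    action = moe_decision(state, "maximize provider utility under a power cap",
                          experts)
    print("combined action:", action)

The point of the sketch is the division of labor: experts stay frozen, and only the gate (here an LLM prompted with the user's objectives and constraints) decides which experts participate and with what weight, so no new DRL model has to be trained per task.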
Pages: 531-536
Page count: 6