Multi-Agent-Deep-Reinforcement-Learning-Enabled Offloading Scheme for Energy Minimization in Vehicle-to-Everything Communication Systems

Cited by: 2
Authors:
Duan, Wenwen [1 ]
Li, Xinmin [2 ,3 ]
Huang, Yi [4 ]
Cao, Hui [1 ]
Zhang, Xiaoqiang [1 ]
Affiliations:
[1] Southwest Univ Sci & Technol, Sch Informat Engn, Mianyang 621000, Peoples R China
[2] Chengdu Univ, Coll Comp Sci, Chengdu 610100, Peoples R China
[3] Chinese Univ Hong Kong, Guangdong Prov Key Lab Future Networks Intelligenc, Shenzhen 518172, Peoples R China
[4] Tongji Univ, Dept Informat & Commun Engn, Shanghai 201804, Peoples R China
Funding:
National Natural Science Foundation of China;
Keywords:
vehicle-to-everything; mobile edge computing; offloading; transmit power; deep reinforcement learning;
DOI:
10.3390/electronics13030663
CLC number:
TP [Automation Technology, Computer Technology];
Subject classification code:
0812;
Abstract:
Offloading computation-intensive tasks to mobile edge computing (MEC) servers, such as road-side units (RSUs) and a base station (BS), can enhance the computation capacity of a vehicle-to-everything (V2X) communication system. In this work, we study an MEC-assisted multi-vehicle V2X communication system in which multi-antenna RSUs with linear receivers and a multi-antenna BS with a zero-forcing (ZF) receiver jointly work as MEC servers to offload the tasks of the vehicles. To control energy consumption and meet the delay requirement of the V2X communication system, an energy consumption minimization problem under a delay constraint is formulated. A multi-agent deep reinforcement learning (MADRL) algorithm is proposed to solve the non-convex energy optimization problem; it trains vehicles to intelligently select the beneficial server association, transmit power, and offloading ratio according to a reward function related to delay and energy consumption. An improved K-nearest neighbors (KNN) algorithm is proposed to assign vehicles to specific RSUs, which reduces the action-space dimension and the complexity of the MADRL algorithm. Numerical simulation results show that the proposed scheme decreases energy consumption while satisfying the delay constraint. When the RSUs adopt the indirect transmission mode and are equipped with matched-filter (MF) receivers, the proposed joint optimization scheme decreases energy consumption by 56.90% and 65.52% compared to the maximum-transmit-power and full-offloading schemes, respectively. When the RSUs are equipped with ZF receivers, the proposed scheme decreases energy consumption by 36.8% compared to MF receivers.
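The abstract describes two components that are easy to illustrate in isolation: a KNN-style pre-assignment of vehicles to RSUs (shrinking each agent's action space) and a reward that trades off energy against a delay constraint. The sketch below is a minimal illustration of these ideas, not the paper's actual algorithm: the nearest-RSU rule, the function names, the penalty form, and all numeric values are assumptions for demonstration only.

```python
import math

def assign_vehicles_to_rsus(vehicles, rsus):
    """Nearest-RSU pre-assignment: a simplified stand-in for the paper's
    improved KNN step, which fixes each vehicle's candidate RSU so the
    MADRL agents search a smaller action space.
    vehicles, rsus: dicts mapping id -> (x, y) coordinates."""
    return {
        vid: min(rsus, key=lambda rid: math.dist(pos, rsus[rid]))
        for vid, pos in vehicles.items()
    }

def reward(energy, delay, delay_max, penalty=10.0):
    """Toy reward shape: maximize negative energy, with a fixed penalty
    when the task delay violates the constraint (form is illustrative)."""
    return -energy - (penalty if delay > delay_max else 0.0)
```

In a full MADRL setup, each vehicle agent would then pick transmit power and offloading ratio for its pre-assigned RSU (or the BS), with this kind of reward steering training toward low-energy, delay-feasible policies.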
Pages: 18
Related papers
50 records
  • [21] Reward-Guided Individualised Communication for Deep Reinforcement Learning in Multi-Agent Systems
    Lin, Yi-Yu
    Zeng, Xiao-Jun
    ADVANCES IN COMPUTATIONAL INTELLIGENCE SYSTEMS, UKCI 2023, 2024, 1453 : 79 - 94
  • [22] Deep Reinforcement Learning Based Task-Oriented Communication in Multi-Agent Systems
    He, Guojun
    Feng, Mingjie
    Zhang, Yu
    Liu, Guanghua
    Dai, Yueyue
    Jiang, Tao
    IEEE WIRELESS COMMUNICATIONS, 2023, 30 (03) : 112 - 119
  • [23] Deep Hierarchical Communication Graph in Multi-Agent Reinforcement Learning
    Liu, Zeyang
    Wan, Lipeng
    Sui, Xue
    Chen, Zhuoran
    Sun, Kewu
    Lan, Xuguang
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 208 - 216
  • [24] Multi-Agent Deep Reinforcement Learning for Walker Systems
    Park, Inhee
    Moh, Teng-Sheng
    20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021, : 490 - 495
  • [25] Intelligent Vehicle Computation Offloading in Vehicular Ad Hoc Networks: A Multi-Agent LSTM Approach with Deep Reinforcement Learning
    Sun, Dingmi
    Chen, Yimin
    Li, Hao
    MATHEMATICS, 2024, 12 (03)
  • [26] Optimizing Autonomous Vehicle Communication through an Adaptive Vehicle-to-Everything (AV2X) Model: A Distributed Deep Learning Approach
    Osman, Radwa Ahmed
    ELECTRONICS, 2023, 12 (19)
  • [27] A Joint Trajectory and Computation Offloading Scheme for UAV-MEC Networks via Multi-Agent Deep Reinforcement Learning
    Du, Xinyang
    Li, Xuanheng
    Zhao, Nan
    Wang, Xianbin
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 5438 - 5443
  • [28] Computation Offloading in Energy Harvesting Systems via Continuous Deep Reinforcement Learning
    Zhang, Jing
    Du, Jun
    Jiang, Chunxiao
    Shen, Yuan
    Wang, Jian
    ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020,
  • [29] Computation offloading in blockchain-enabled MCS systems: A scalable deep reinforcement learning approach
    Chen, Zheyi
    Zhang, Junjie
    Huang, Zhiqin
    Wang, Pengfei
    Yu, Zhengxin
    Miao, Wang
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 153 : 301 - 311
  • [30] Distributed Task Offloading for Large-Scale VEC Systems: A Multi-agent Deep Reinforcement Learning Method
    Lu, Yanfei
    Han, Dengyu
    Wang, Xiaoxuan
    Gao, Qinghe
    2022 14TH INTERNATIONAL CONFERENCE ON COMMUNICATION SOFTWARE AND NETWORKS (ICCSN 2022), 2022, : 161 - 165