Multi-Agent-Deep-Reinforcement-Learning-Enabled Offloading Scheme for Energy Minimization in Vehicle-to-Everything Communication Systems

Cited by: 2
Authors
Duan, Wenwen [1 ]
Li, Xinmin [2 ,3 ]
Huang, Yi [4 ]
Cao, Hui [1 ]
Zhang, Xiaoqiang [1 ]
Affiliations
[1] Southwest Univ Sci & Technol, Sch Informat Engn, Mianyang 621000, Peoples R China
[2] Chengdu Univ, Coll Comp Sci, Chengdu 610100, Peoples R China
[3] Chinese Univ Hong Kong, Guangdong Prov Key Lab Future Networks Intelligenc, Shenzhen 518172, Peoples R China
[4] Tongji Univ, Dept Informat & Commun Engn, Shanghai 201804, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
vehicle-to-everything; mobile edge computing; offloading; transmit power; deep reinforcement learning;
DOI
10.3390/electronics13030663
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Offloading computation-intensive tasks to mobile edge computing (MEC) servers, such as road-side units (RSUs) and a base station (BS), can enhance the computation capacity of a vehicle-to-everything (V2X) communication system. In this work, we study an MEC-assisted multi-vehicle V2X communication system in which multi-antenna RSUs with linear receivers and a multi-antenna BS with a zero-forcing (ZF) receiver jointly serve as MEC servers to offload the tasks of the vehicles. To control the energy consumption and satisfy the delay requirement of the V2X communication system, an energy consumption minimization problem under a delay constraint is formulated. A multi-agent deep reinforcement learning (MADRL) algorithm is proposed to solve the non-convex energy optimization problem; it trains the vehicles to intelligently select the server association, transmit power and offloading ratio according to a reward function related to the delay and energy consumption. An improved K-nearest neighbors (KNN) algorithm is proposed to assign vehicles to specific RSUs, which reduces the action-space dimension and the complexity of the MADRL algorithm. Numerical simulation results show that the proposed scheme decreases energy consumption while satisfying the delay constraint. When the RSUs adopt the indirect transmission mode and are equipped with matched-filter (MF) receivers, the proposed joint optimization scheme decreases the energy consumption by 56.90% and 65.52% compared to the maximum transmit power and full offloading schemes, respectively. When the RSUs are equipped with ZF receivers, the proposed scheme decreases the energy consumption by 36.8% compared to the case with MF receivers.
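The abstract only describes the learning setup qualitatively, so the following Python sketch is a minimal illustration rather than the authors' implementation: a distance-based, capacity-aware KNN-style pre-assignment of vehicles to RSUs (the device the paper uses to shrink each agent's action space) and a per-vehicle reward that trades energy consumption against a delay-constraint penalty. The function names, the capacity limit, and the penalty weight are all illustrative assumptions.

```python
# Minimal illustrative sketch (not the paper's implementation), assuming:
# - vehicles and RSUs are points in a 2-D plane,
# - each RSU can serve at most `capacity` vehicles,
# - the reward trades energy against a fixed delay-violation penalty.
import numpy as np


def assign_vehicles_to_rsus(vehicle_pos, rsu_pos, capacity=4):
    """KNN-style pre-assignment: attach each vehicle to the nearest RSU that
    still has capacity, so each MADRL agent only chooses between its assigned
    RSU and the BS instead of among all RSUs."""
    assignment = {}
    load = np.zeros(len(rsu_pos), dtype=int)
    for v, pos in enumerate(vehicle_pos):
        # RSUs sorted by Euclidean distance to this vehicle
        order = np.argsort(np.linalg.norm(rsu_pos - pos, axis=1))
        for r in order:
            if load[r] < capacity:
                assignment[v] = int(r)
                load[r] += 1
                break
    return assignment


def reward(energy, delay, delay_max, penalty=10.0):
    """Per-vehicle reward: negative energy consumption, with an extra penalty
    whenever the task completion delay violates the constraint."""
    return -energy - (penalty if delay > delay_max else 0.0)


# Each agent's action couples the three decisions named in the abstract:
# (server association, transmit power in watts, offloading ratio in [0, 1]).
if __name__ == "__main__":
    vehicles = np.array([[0.0, 0.0], [50.0, 10.0], [120.0, -5.0]])
    rsus = np.array([[10.0, 0.0], [100.0, 0.0]])
    print(assign_vehicles_to_rsus(vehicles, rsus))        # {0: 0, 1: 0, 2: 1}
    print(reward(energy=0.8, delay=0.12, delay_max=0.1))  # -10.8
```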
Pages: 18
Related Papers
50 items in total
  • [31] A Distributed Deep Reinforcement Learning-based Optimization Scheme for Vehicle Edge Computing Task Offloading
    Li, Bingxian
    Zhu, Lin
    Tan, Long
    PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 218 - 223
  • [32] Trajectory Design and Bandwidth Assignment for UAVs-enabled Communication Network with Multi-Agent Deep Reinforcement Learning
    Wang, Weijian
    Lin, Yun
    2021 IEEE 94TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2021-FALL), 2021,
  • [33] Optimization for computational offloading in multi-access edge computing: A deep reinforcement learning scheme
    Wang, Jian
    Ke, Hongchang
    Liu, Xuejie
    Wang, Hui
    COMPUTER NETWORKS, 2022, 204
  • [34] Hierarchical Multi-Agent Deep Reinforcement Learning for Backscatter-aided Data Offloading
    Zhou, Hang
    Long, Yusi
    Zhang, Wenjie
    Xu, Jing
    Gong, Shimin
    2022 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE (WCNC), 2022, : 542 - 547
  • [36] Multi-agent Computation Offloading in UAV Assisted MEC via Deep Reinforcement Learning
    He, Hang
    Ren, Tao
    Qiu, Yuan
    Hu, Zheyuan
    Li, Yanqi
    SMART COMPUTING AND COMMUNICATION, 2022, 13202 : 416 - 426
  • [37] Multi-agent Deep Reinforcement Learning Aided Computing Offloading in LEO Satellite Networks
    Lai, Junyu
    Liu, Huashuo
    Sun, Yusong
    Tan, Huidong
    Gan, Lianqiang
    Chen, Zhiyong
    ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023, : 3438 - 3443
  • [38] Multi-Agent Deep Reinforcement Learning for Efficient Computation Offloading in Mobile Edge Computing
    Jiao, Tianzhe
    Feng, Xiaoyue
    Guo, Chaopeng
    Wang, Dongqi
    Song, Jie
    CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 76 (03): : 3585 - 3603
  • [39] Multi-Agent Deep Reinforcement Learning for Cooperative Offloading in Cloud-Edge Computing
    Suzuki, Akito
    Kobayashi, Masahiro
    IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC 2022), 2022, : 3660 - 3666
  • [40] Decentralized Computation Offloading with Cooperative UAVs: Multi-Agent Deep Reinforcement Learning Perspective
    Hwang, Sangwon
    Lee, Hoon
    Park, Juseong
    Lee, Inkyu
    IEEE WIRELESS COMMUNICATIONS, 2022, 29 (04) : 24 - 31