Multi-Agent-Deep-Reinforcement-Learning-Enabled Offloading Scheme for Energy Minimization in Vehicle-to-Everything Communication Systems

Cited by: 2
Authors
Duan, Wenwen [1 ]
Li, Xinmin [2 ,3 ]
Huang, Yi [4 ]
Cao, Hui [1 ]
Zhang, Xiaoqiang [1 ]
Affiliations
[1] Southwest Univ Sci & Technol, Sch Informat Engn, Mianyang 621000, Peoples R China
[2] Chengdu Univ, Coll Comp Sci, Chengdu 610100, Peoples R China
[3] Chinese Univ Hong Kong, Guangdong Prov Key Lab Future Networks Intelligence, Shenzhen 518172, Peoples R China
[4] Tongji Univ, Dept Informat & Commun Engn, Shanghai 201804, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
vehicle-to-everything; mobile edge computing; offloading; transmit power; deep reinforcement learning;
DOI
10.3390/electronics13030663
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Offloading computation-intensive tasks to mobile edge computing (MEC) servers, such as road-side units (RSUs) and a base station (BS), can enhance the computation capacity of the vehicle-to-everything (V2X) communication system. In this work, we study an MEC-assisted multi-vehicle V2X communication system in which multi-antenna RSUs with linear receivers and a multi-antenna BS with a zero-forcing (ZF) receiver jointly serve as MEC servers for the offloaded tasks of the vehicles. To control the energy consumption and ensure the delay requirement of the V2X communication system, an energy consumption minimization problem under a delay constraint is formulated. A multi-agent deep reinforcement learning (MADRL) algorithm is proposed to solve the non-convex energy optimization problem; it trains the vehicles to intelligently select a beneficial server association, transmit power, and offloading ratio according to a reward function related to the delay and energy consumption. An improved K-nearest neighbors (KNN) algorithm is proposed to assign each vehicle to a specific RSU, which reduces the action-space dimensions and the complexity of the MADRL algorithm. Numerical simulation results show that the proposed scheme decreases energy consumption while satisfying the delay constraint. When the RSUs adopt the indirect transmission mode and are equipped with matched-filter (MF) receivers, the proposed joint optimization scheme decreases energy consumption by 56.90% and 65.52% compared to the maximum-transmit-power and full-offloading schemes, respectively. When the RSUs are equipped with ZF receivers, the proposed scheme decreases energy consumption by 36.8% compared to the MF case.
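The abstract describes a per-vehicle reward tied to energy consumption and a delay constraint, plus a KNN-based pre-assignment of vehicles to RSUs that shrinks each agent's action space. The short Python sketch below illustrates how such a reward and assignment step could be structured; the function names, the penalty weight, the RSU capacity limit, and the Euclidean-distance nearest-RSU rule are illustrative assumptions rather than the paper's exact formulation.

```python
# Illustrative sketch only: the reward shaping, the penalty weight, and the
# distance-based KNN-style assignment below are assumptions inferred from the
# abstract, not the paper's exact algorithm.
import numpy as np

def assign_vehicles_to_rsus(vehicle_pos, rsu_pos, capacity):
    """Greedy nearest-RSU pre-assignment: each vehicle is mapped to the closest
    RSU that still has spare capacity, so the RSU choice no longer needs to be
    part of the MADRL action space."""
    assignment = {}
    load = {r: 0 for r in range(len(rsu_pos))}
    for v, pos in enumerate(vehicle_pos):
        dists = np.linalg.norm(rsu_pos - pos, axis=1)   # distance to every RSU
        for r in np.argsort(dists):                     # try nearest RSUs first
            if load[r] < capacity:
                assignment[v] = int(r)
                load[r] += 1
                break
    return assignment

def reward(energy, delay, delay_max, penalty=10.0):
    """Per-agent reward: energy minimization is encoded as a negative reward,
    and violating the delay constraint incurs an additional penalty term."""
    r = -energy
    if delay > delay_max:
        r -= penalty * (delay - delay_max)
    return r

# Toy usage: 4 vehicles, 2 RSUs, each RSU serving at most 2 vehicles.
vehicles = np.array([[0.0, 0.0], [10.0, 0.0], [100.0, 5.0], [110.0, 0.0]])
rsus = np.array([[5.0, 0.0], [105.0, 0.0]])
print(assign_vehicles_to_rsus(vehicles, rsus, capacity=2))
print(reward(energy=0.8, delay=0.12, delay_max=0.10))
```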
Pages: 18
Related Papers
50 records in total
  • [1] Chen, Ting; Wang, Shujiao; Fan, Xin; Zhang, Xiujuan; Luo, Chuanwen; Hong, Yi. UAV-Assisted Multi-Object Computing Offloading for Blockchain-Enabled Vehicle-to-Everything Systems. Computers, Materials and Continua, 2024, 81(03): 3927-3950.
  • [2] Cheng, Chia-Hsin; Huang, Kuo-Ting; Huang, Yung-Fa; Xu, Yi-Xuan; Liao, Kai-Siang. Deep Learning Object Detection for Vehicle-to-Everything Systems with ROS 2.0 Distributed Communication Architecture. 2024 11th International Conference on Consumer Electronics-Taiwan (ICCE-Taiwan 2024), 2024: 527-528.
  • [3] Moniruzzaman, M.; Yassine, A.; Benlamri, R. Blockchain and Federated Reinforcement Learning for Vehicle-to-Everything Energy Trading in Smart Grids. IEEE Transactions on Artificial Intelligence, 5(02): 839-853.
  • [4] Kim, Myoungsu; Oh, Insu; Yim, Kangbin; Sahlabadi, Mahdi; Shukur, Zarina. Security of 6G-Enabled Vehicle-to-Everything Communication in Emerging Federated Learning and Blockchain Technologies. IEEE Access, 2024, 12: 33972-34001.
  • [5] Bo, Jianxiong; Zhao, Xu. Vehicle Edge Computing Task Offloading Strategy Based on Multi-Agent Deep Reinforcement Learning. Journal of Grid Computing, 2025, 23(02).
  • [6] Xiong, Jianyu; Guo, Peng; Wang, Yi; Meng, Xiangyin; Zhang, Jian; Qian, Linmao; Yu, Zhenglin. Multi-agent deep reinforcement learning for task offloading in group distributed manufacturing systems. Engineering Applications of Artificial Intelligence, 2023, 118.
  • [7] You, Young-Hwan; Jung, Yong-An. Complexity-Efficient Sidelink Synchronization Signal Detection Scheme for Cellular Vehicle-to-Everything Communication Systems. Mathematics, 2023, 11(18).
  • [8] Wu, Fanyi; Zhang, Hongliang; Wu, Jianjun; Song, Lingyang; Han, Zhu; Poor, H. Vincent. AoI Minimization for UAV-to-Device Underlay Communication by Multi-agent Deep Reinforcement Learning. 2020 IEEE Global Communications Conference (GLOBECOM), 2020.
  • [9] Zhou, Hang; Long, Yusi; Gong, Shimin; Zhu, Kun; Hoang, Dinh Thai; Niyato, Dusit. Hierarchical Multi-Agent Deep Reinforcement Learning for Energy-Efficient Hybrid Computation Offloading. IEEE Transactions on Vehicular Technology, 2023, 72(01): 986-1001.
  • [10] Hu, Shucheng; Ren, Tao; Niu, Jianwei; Hu, Zheyuan; Xing, Guoliang. Distributed Task Offloading based on Multi-Agent Deep Reinforcement Learning. 2021 17th International Conference on Mobility, Sensing and Networking (MSN 2021), 2021: 575-583.