Event-Triggered Reinforcement Learning Based Joint Resource Allocation for Ultra-Reliable Low-Latency V2X Communications

Cited: 0
Authors
Khan, Nasir [1 ]
Coleri, Sinem [1 ]
Affiliations
[1] Koc Univ, Dept Elect & Elect Engn, TR-34450 Istanbul, Turkiye
Keywords
Resource management; Reliability; Vehicle-to-everything; Ultra reliable low latency communication; Optimization; Error probability; Reliability engineering; Deep reinforcement learning (DRL); event-triggered learning; finite block length transmission; 6G networks; ultra-reliable and low-latency communications (URLLC); vehicular networks; vehicle-to-everything (V2X) communication; SELECTION; URLLC; POWER; NETWORKING; SYSTEMS
DOI
10.1109/TVT.2024.3424398
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline codes
0808; 0809;
Abstract
Future 6G-enabled vehicular networks face the challenge of ensuring ultra-reliable low-latency communication (URLLC) for delivering safety-critical information in a timely manner. Existing resource allocation schemes for vehicle-to-everything (V2X) communication systems primarily rely on traditional optimization-based algorithms. However, these methods often fail to guarantee the strict reliability and latency requirements of URLLC applications in dynamic vehicular environments due to the high complexity and communication overhead of the solution methodologies. This paper proposes a novel deep reinforcement learning (DRL) based framework for the joint power and block length allocation to minimize the worst-case decoding-error probability in the finite block length (FBL) regime for a URLLC-based downlink V2X communication system. The problem is formulated as a non-convex mixed-integer nonlinear programming problem (MINLP). Initially, an algorithm grounded in optimization theory is developed based on deriving the joint convexity of the decoding error probability in the block length and transmit power variables within the region of interest. Subsequently, an efficient event-triggered DRL based algorithm is proposed to solve the joint optimization problem. Incorporating event-triggered learning into the DRL framework enables assessing whether to initiate the DRL process, thereby reducing the number of DRL process executions while maintaining reasonable reliability performance. The DRL framework consists of a two-layered structure. In the first layer, multiple deep Q-networks (DQNs) are established at the central trainer for block length optimization. The second layer involves an actor-critic network and utilizes the deep deterministic policy-gradient (DDPG)-based algorithm to optimize the power allocation. 
Simulation results demonstrate that the proposed event-triggered DRL scheme can achieve 95% of the performance of the joint optimization scheme while reducing the DRL executions by up to 24% for different network settings.
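The core idea of event-triggered learning in the abstract above is to gate each invocation of the (costly) DRL allocator behind a trigger condition, reusing the previous allocation when the channel has not changed enough to warrant a fresh decision. The following is a minimal illustrative sketch of that control flow only; the trigger rule (relative channel-gain drift) and the toy `allocate` heuristic standing in for the paper's two-layer DQN/DDPG allocator are assumptions, not the authors' actual method.

```python
def event_trigger(prev_gain, curr_gain, threshold=0.2):
    """Fire the DRL allocator only when the channel gain has drifted
    by more than `threshold` (relative change) since the last run.
    The drift-based rule is a hypothetical stand-in for the paper's
    event-triggering condition."""
    return abs(curr_gain - prev_gain) / max(prev_gain, 1e-9) > threshold

def allocate(gain):
    """Toy placeholder for the two-layer DRL allocator (DQN block-length
    layer + DDPG power layer in the paper); returns (block length, power)."""
    block_length = max(100, int(500 / gain))
    power = min(1.0, 0.5 / gain)
    return block_length, power

def run(gains, threshold=0.2):
    """Event-triggered loop: rerun the allocator only when the trigger
    fires; otherwise reuse the previous allocation. Returns the allocation
    history and the number of allocator executions actually performed."""
    prev_gain = gains[0]
    alloc = allocate(prev_gain)
    executions = 1
    history = [alloc]
    for g in gains[1:]:
        if event_trigger(prev_gain, g, threshold):
            alloc = allocate(g)
            prev_gain = g
            executions += 1
        history.append(alloc)
    return history, executions
```

For example, over the gain trace `[1.0, 1.05, 1.5, 1.52, 0.5]` with a 20% threshold, the allocator runs only 3 times instead of 5, which mirrors (in miniature) the reduction in DRL executions the simulations report.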
Pages: 16991-17006
Page count: 16