Deep Reinforcement Learning-Based Adaptive Beam Tracking and Resource Allocation in 6G Vehicular Networks with Switched Beam Antennas

Cited by: 2
Authors
Ahmed, Tahir H. [1 ]
Tiang, Jun Jiat [1 ]
Mahmud, Azwan [1 ]
Gwo Chin, Chung [1 ]
Do, Dinh-Thuan [2 ]
Affiliations
[1] Multimedia Univ, Ctr Wireless Technol, Cyberjaya 63000, Selangor, Malaysia
[2] Asia Univ, Coll Informat & Elect Engn, Dept Comp Sci & Informat Engn, Taichung 41354, Taiwan
Keywords
vehicle-to-vehicle (V2V); switched beam antenna; deep reinforcement learning; 6G communication; SECURE; V2V;
DOI
10.3390/electronics12102294
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline Classification Code
0812
Abstract
In this paper, we propose a novel switched beam antenna system model integrated with deep reinforcement learning (DRL) for 6G vehicle-to-vehicle (V2V) communications. The proposed system model addresses the challenges of highly dynamic V2V environments, including rapid changes in channel conditions, interference, and Doppler effects. By leveraging the beam-switching capabilities of switched beam antennas and the intelligent decision-making of DRL, the proposed approach enhances the performance of 6G V2V communications in terms of throughput, latency, reliability, and spectral efficiency. The work develops a comprehensive mathematical model that accounts for 6G channel modeling, beam switching, and beam-alignment errors. The proposed DRL framework learns optimal beam-switching decisions in real time, adapting to complex and varying V2V communication scenarios. Integrating the proposed antenna system with the DRL model yields a robust solution capable of maintaining reliable communication links in a highly dynamic environment. To validate the proposed approach, extensive simulations were conducted, and performance was analyzed using metrics such as throughput, latency, reliability, energy efficiency, resource utilization, and network scalability. The results demonstrate that the proposed system model significantly outperforms conventional V2V communication systems and other state-of-the-art techniques. Furthermore, the results show that the beam-switching capabilities of the switched beam antenna system and the intelligent decision-making of the DRL model are essential for addressing the challenges of 6G V2V communications.
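The abstract describes a DRL agent that selects beam directions from a switched beam antenna in real time based on the observed channel state. As a rough illustration of this kind of controller (not the authors' implementation), the sketch below shows a minimal DQN-style beam-selection agent; the state features (per-beam SNR estimates plus relative speed and the previous beam index), reward shaping, network sizes, and the number of beams are assumptions made for illustration only.

```python
# Hedged illustration of a DQN-style beam-switching agent.
# All dimensions, state features, and hyperparameters below are assumptions
# for illustration; they are not taken from the cited paper.
import random
import torch
import torch.nn as nn
import torch.optim as optim

NUM_BEAMS = 8               # assumed number of switchable beam directions
STATE_DIM = NUM_BEAMS + 2   # assumed: per-beam SNR estimates + relative speed + last beam index


class QNetwork(nn.Module):
    """Small MLP mapping a channel-state vector to one Q-value per beam."""

    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class BeamSwitchAgent:
    """Epsilon-greedy agent that picks a beam index each scheduling slot."""

    def __init__(self, gamma: float = 0.95, lr: float = 1e-3, eps: float = 0.1):
        self.q = QNetwork(STATE_DIM, NUM_BEAMS)
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.gamma, self.eps = gamma, eps

    def select_beam(self, state: torch.Tensor) -> int:
        if random.random() < self.eps:            # explore a random beam
            return random.randrange(NUM_BEAMS)
        with torch.no_grad():                     # exploit the current Q-estimates
            return int(self.q(state).argmax().item())

    def update(self, s: torch.Tensor, a: int, reward: float, s_next: torch.Tensor):
        """One-step TD update; the reward could combine throughput and latency terms."""
        q_sa = self.q(s)[a]
        with torch.no_grad():
            target = reward + self.gamma * self.q(s_next).max()
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
```

In a V2V simulator, `select_beam` would be called once per scheduling slot and `update` after the resulting throughput/latency reward is observed; the paper's actual state, action, and reward definitions may differ from this sketch.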
Pages: 30