Vehicular Fog Resource Allocation Approach for VANETs Based on Deep Adaptive Reinforcement Learning Combined With Heuristic Information

Cited by: 2
Authors
Cheng, Yunli [1 ]
Vijayaraj, A. [2 ]
Pokkuluri, Kiran Sree [3 ]
Salehnia, Taybeh [4 ]
Montazerolghaem, Ahmadreza [5 ]
Rateb, Roqia [6 ]
Affiliations
[1] Guangdong Polytech Sci & Trade, Sch Informat, Guangzhou 511500, Guangdong, Peoples R China
[2] RMK Engn Coll, Dept Informat Technol, Chennai 601206, Tamil Nadu, India
[3] Shri Vishnu Engn Coll Women, Dept Comp Sci & Engn, Bhimavaram 534202, India
[4] Razi Univ, Dept Comp Engn & Informat Technol, Kermanshah 6714414971, Iran
[5] Univ Isfahan, Fac Comp Engn, Esfahan 8174673441, Iran
[6] Al Ahliyya Amman Univ, Fac Informat Technol, Dept Comp Sci, Amman 19328, Jordan
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Resource management; Vehicular ad hoc networks; Computational modeling; Cloud computing; Edge computing; Computer architecture; Optimization methods; Reinforcement learning; Vehicular fog resource allocation; vehicular ad hoc networks; revised fitness-based binary battle royale optimizer; deep adaptive reinforcement learning; reward assessment; service satisfaction; service latency; ARCHITECTURE; INTERNET; LATENCY;
DOI
10.1109/ACCESS.2024.3455168
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Intelligent Transport Systems (ITS) are gradually moving toward practical deployment because of the rapid growth of network and information technology. Currently, the low-latency requirements of ITS are hard to meet in the conventional cloud-based Internet of Vehicles (IoV) infrastructure. In the IoV context, Vehicular Fog Computing (VFC) has emerged as an inventive and viable architecture that can effectively reduce the computation time of diverse vehicular application tasks, providing vehicles with rapid task-execution services. The benefits of fog computing and vehicular cloud computing are combined in a novel concept called fog-based Vehicular Ad Hoc Networks (VANETs). Because these networks depend on mobile power sources, they face specific limitations, and cost-effective routing and load distribution in VANETs pose additional difficulties. In this work, a novel method is developed for vehicular applications that uses parked vehicles to address the difficulty of allocating limited fog resources while minimizing service latency. An improved heuristic algorithm, the Revised Fitness-based Binary Battle Royale Optimizer (RF-BinBRO), is proposed to solve the problems of vehicular networks effectively. Additionally, the combination of Deep Adaptive Reinforcement Learning (DARL) and the improved BinBRO algorithm effectively analyzes resource allocation, vehicle parking, and movement status, with parameters tuned by RF-BinBRO to achieve better transportation performance. Simulations are carried out to assess the performance of the proposed algorithm. The results indicate that the developed VFC resource allocation model attains higher service satisfaction than traditional methods for resource allocation.
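The abstract does not give the details of RF-BinBRO, but the general idea of a fitness-based binary Battle Royale optimizer can be sketched as follows. This is a minimal illustrative sketch, not the authors' method: the function name `rf_binbro_sketch`, the pairwise-duel update, the bit-copy/mutation probabilities, and the toy fog-node fitness are all assumptions chosen for illustration.

```python
import random

def rf_binbro_sketch(fitness, n_bits, n_players=20, max_iters=100, seed=0):
    """Illustrative sketch of a fitness-based binary Battle Royale optimizer.

    Each 'player' is a bit-vector (e.g. which fog nodes an allocation uses,
    one bit per node). In every iteration each player duels a random
    opponent; the loser (worse fitness) is 'damaged' and flips bits toward
    the winner, a binary analogue of the battle-royale move-toward-best rule.
    """
    rng = random.Random(seed)
    players = [[rng.randint(0, 1) for _ in range(n_bits)]
               for _ in range(n_players)]
    best = list(min(players, key=fitness))
    for _ in range(max_iters):
        for i in range(n_players):
            j = rng.randrange(n_players)
            if j == i:
                continue
            # fitness-based comparison: the worse player takes damage
            if fitness(players[i]) <= fitness(players[j]):
                winner, loser = players[i], players[j]
            else:
                winner, loser = players[j], players[i]
            for b in range(n_bits):
                if rng.random() < 0.5:        # copy the winner's bit
                    loser[b] = winner[b]
                elif rng.random() < 0.1:      # small random mutation
                    loser[b] = 1 - loser[b]
        cand = min(players, key=fitness)
        if fitness(cand) < fitness(best):
            best = list(cand)
    return best

# Toy fitness (assumed for illustration): an allocation should activate
# exactly 3 of 10 fog nodes; the cost is the deviation from that target.
cost = lambda bits: abs(sum(bits) - 3)
sol = rf_binbro_sketch(cost, n_bits=10)
```

In the paper's setting, the fitness would instead score service latency and satisfaction of a candidate allocation, and the resulting bit-vectors would also feed the DARL component's reward assessment; those couplings are beyond what the abstract specifies.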
Pages: 139056-139075
Page count: 20
Related Papers
50 records in total
  • [21] Deep-Reinforcement-Learning-Based Resource Allocation for Content Distribution in Fog Radio Access Networks
    Fang, Chao; Xu, Hang; Yang, Yihui; Hu, Zhaoming; Tu, Shanshan; Ota, Kaoru; Yang, Zheng; Dong, Mianxiong; Han, Zhu; Yu, F. Richard; Liu, Yunjie
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (18): 16874-16883
  • [22] Resource allocation strategy for vehicular communication networks based on multi-agent deep reinforcement learning
    Liu, Zhibin; Deng, Yifei
    VEHICULAR COMMUNICATIONS, 2025, 53
  • [23] Adaptive Resource Allocation for Mobile Edge Computing in Internet of Vehicles: A Deep Reinforcement Learning Approach
    Zhao, Junhui; Quan, Haoyu; Xia, Minghua; Wang, Dongming
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (04): 5834-5848
  • [24] Multiagent Deep-Reinforcement-Learning-Based Resource Allocation for Heterogeneous QoS Guarantees for Vehicular Networks
    Tian, Jie; Liu, Qianqian; Zhang, Haixia; Wu, Dalei
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (03): 1683-1695
  • [25] Deep reinforcement learning-based joint optimization model for vehicular task offloading and resource allocation
    Li, Zhi-Yuan; Zhang, Zeng-Xiang
    PEER-TO-PEER NETWORKING AND APPLICATIONS, 2024, 17 (04): 2001-2015
  • [26] Deep Reinforcement Learning-Based Resource Allocation for Integrated Sensing, Communication, and Computation in Vehicular Network
    Yang, Liu; Wei, Yifei; Feng, Zhiyong; Zhang, Qixun; Han, Zhu
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (12): 18608-18622
  • [27] Adaptive Resource Allocation Considering Power-Consumption Outage: A Deep Reinforcement Learning Approach
    Luo, Jia; Chen, Qianbin; Tang, Lun; Zhang, Zhicai; Li, Yu
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2023, 72 (06): 8111-8116
  • [28] Reinforcement Learning for Adaptive Resource Allocation in Fog RAN for IoT With Heterogeneous Latency Requirements
    Nassar, Almuthanna; Yilmaz, Yasin
    IEEE ACCESS, 2019, 7: 128014-128025
  • [29] Research on Resource Allocation Method of Space Information Networks Based on Deep Reinforcement Learning
    Meng, Xiangli; Wu, Lingda; Yu, Shaobo
    REMOTE SENSING, 2019, 11 (04)
  • [30] Task Offloading and Resource Allocation for Fog Computing in NG Wireless Networks: A Federated Deep Reinforcement Learning Approach
    Su, Chan; Wei, Jianguo; Lin, Deyu; Kong, Linghe; Guan, Yong Liang
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (04): 6802-6816