Vehicular Fog Resource Allocation Approach for VANETs Based on Deep Adaptive Reinforcement Learning Combined With Heuristic Information

Cited by: 2
Authors
Cheng, Yunli [1 ]
Vijayaraj, A. [2 ]
Pokkuluri, Kiran Sree [3 ]
Salehnia, Taybeh [4 ]
Montazerolghaem, Ahmadreza [5 ]
Rateb, Roqia [6 ]
Affiliations
[1] Guangdong Polytech Sci & Trade, Sch Informat, Guangzhou 511500, Guangdong, Peoples R China
[2] RMK Engn Coll, Dept Informat Technol, Chennai 601206, Tamil Nadu, India
[3] Shri Vishnu Engn Coll Women, Dept Comp Sci & Engn, Bhimavaram 534202, India
[4] Razi Univ, Dept Comp Engn & Informat Technol, Kermanshah 6714414971, Iran
[5] Univ Isfahan, Fac Comp Engn, Esfahan 8174673441, Iran
[6] Al Ahliyya Amman Univ, Fac Informat Technol, Dept Comp Sci, Amman 19328, Jordan
Source
IEEE ACCESS, 2024, Vol. 12
Keywords
Resource management; Vehicular ad hoc networks; Computational modeling; Cloud computing; Edge computing; Computer architecture; Optimization methods; Reinforcement learning; Vehicular fog resource allocation; vehicular ad hoc networks; revised fitness-based binary battle royale optimizer; deep adaptive reinforcement learning; reward assessment; service satisfaction; service latency; ARCHITECTURE; INTERNET; LATENCY;
DOI
10.1109/ACCESS.2024.3455168
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Intelligent Transport Systems (ITS) are gradually progressing toward practical application because of the rapid growth in network and information technology. Currently, the low-latency requirements of ITS are hard to meet in the conventional cloud-based Internet of Vehicles (IoV) infrastructure. In the IoV context, Vehicular Fog Computing (VFC) has become recognized as an innovative and viable architecture that can effectively reduce the computation time of diverse vehicular application tasks, providing vehicles with rapid task-execution services. The benefits of fog computing and vehicular cloud computing are combined in a novel concept called fog-based Vehicular Ad Hoc Networks (VANETs). Because these networks depend on mobile power sources, they face specific limitations, and cost-effective routing and load distribution in VANETs pose additional difficulties. In this work, a novel method is developed for vehicular applications that uses parked vehicles to address the difficulty of allocating limited fog resources and to minimize service latency. An improved heuristic algorithm, the Revised Fitness-based Binary Battle Royale Optimizer (RF-BinBRO), is proposed to solve the problems of vehicular networks effectively. Additionally, the combination of Deep Adaptive Reinforcement Learning (DARL) and the improved BinBRO algorithm effectively analyzes resource allocation, vehicle parking, and movement status; the parameters are tuned using RF-BinBRO to achieve better transportation performance. Simulations are carried out to assess the performance of the proposed algorithm. The results show that the developed VFC resource allocation model attains higher service satisfaction than traditional resource allocation methods.
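Illustrative note: the abstract names two core components, a binary Battle Royale Optimizer variant (RF-BinBRO) that searches over discrete offloading decisions and a reward assessment that scores service latency and satisfaction. The record gives no pseudocode, so the following Python fragment is only a minimal sketch of how a simplified binary battle-royale search could optimize a parked-vehicle offloading vector against a latency-based reward; the problem sizes, latency values, capacity, and names (reward, binary_bro) are hypothetical assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical problem instance: decide which offloadable tasks are served by
# parked-vehicle fog nodes (bit = 1) versus the remote cloud (bit = 0).
N_TASKS = 12
FOG_LATENCY = rng.uniform(5, 20, N_TASKS)     # assumed per-task fog latency (ms)
CLOUD_LATENCY = rng.uniform(40, 80, N_TASKS)  # assumed per-task cloud latency (ms)
FOG_CAPACITY = 7                              # assumed task capacity of the parked-vehicle pool

def reward(assignment):
    # Stand-in reward assessment: lower total service latency is better,
    # and exceeding the parked-vehicle capacity is penalized.
    latency = np.where(assignment == 1, FOG_LATENCY, CLOUD_LATENCY).sum()
    overload = max(0, int(assignment.sum()) - FOG_CAPACITY)
    return -latency - 50.0 * overload

def binary_bro(pop_size=20, max_damage=3, iterations=100):
    # Simplified binary Battle Royale Optimizer over offloading bit vectors:
    # soldiers duel in random pairs, losers drift toward the current best,
    # and badly damaged soldiers respawn at random positions.
    pop = rng.integers(0, 2, size=(pop_size, N_TASKS))
    damage = np.zeros(pop_size, dtype=int)
    for _ in range(iterations):
        fitness = np.array([reward(ind) for ind in pop])
        best = pop[fitness.argmax()].copy()
        opponents = rng.permutation(pop_size)
        for i, j in enumerate(opponents):
            if fitness[i] >= fitness[j]:
                continue                              # soldier i won this duel
            damage[i] += 1
            if damage[i] > max_damage:
                pop[i] = rng.integers(0, 2, N_TASKS)  # respawn a badly hurt soldier
                damage[i] = 0
            else:
                move = rng.random(N_TASKS) < 0.5      # copy roughly half the best bits
                pop[i] = np.where(move, best, pop[i])
    fitness = np.array([reward(ind) for ind in pop])
    return pop[fitness.argmax()], float(fitness.max())

best_assignment, best_reward = binary_bro()
print("fog-offloaded tasks:", np.flatnonzero(best_assignment).tolist())
print("best reward:", round(best_reward, 1))

In the paper's actual approach, the DARL agent and the revised fitness function would supply this scoring; the static latency model above merely stands in for them so the search loop can run end to end.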
Pages: 139056-139075
Number of pages: 20