Vehicular Fog Resource Allocation Approach for VANETs Based on Deep Adaptive Reinforcement Learning Combined With Heuristic Information

Cited: 2
Authors
Cheng, Yunli [1 ]
Vijayaraj, A. [2 ]
Pokkuluri, Kiran Sree [3 ]
Salehnia, Taybeh [4 ]
Montazerolghaem, Ahmadreza [5 ]
Rateb, Roqia [6 ]
Affiliations
[1] Guangdong Polytech Sci & Trade, Sch Informat, Guangzhou 511500, Guangdong, Peoples R China
[2] RMK Engn Coll, Dept Informat Technol, Chennai 601206, Tamil Nadu, India
[3] Shri Vishnu Engn Coll Women, Dept Comp Sci & Engn, Bhimavaram 534202, India
[4] Razi Univ, Dept Comp Engn & Informat Technol, Kermanshah 6714414971, Iran
[5] Univ Isfahan, Fac Comp Engn, Esfahan 8174673441, Iran
[6] Al Ahliyya Amman Univ, Fac Informat Technol, Dept Comp Sci, Amman 19328, Jordan
Source
IEEE ACCESS, 2024, Vol. 12
Keywords
Resource management; Vehicular ad hoc networks; Computational modeling; Cloud computing; Edge computing; Computer architecture; Optimization methods; Reinforcement learning; Vehicular fog resource allocation; vehicular ad hoc networks; revised fitness-based binary battle royale optimizer; deep adaptive reinforcement learning; reward assessment; service satisfaction; service latency; ARCHITECTURE; INTERNET; LATENCY;
DOI
10.1109/ACCESS.2024.3455168
Chinese Library Classification (CLC)
TP [Automation and computer technology];
Discipline code
0812;
Abstract
Intelligent Transport Systems (ITS) are gradually moving toward practical deployment thanks to the rapid growth of network and information technology. Currently, the low-latency requirements of ITS are hard to meet with conventional cloud-based Internet of Vehicles (IoV) infrastructure. In the IoV context, Vehicular Fog Computing (VFC) has emerged as an innovative and viable architecture that can effectively reduce the computation time of diverse vehicular application tasks, providing vehicles with rapid task execution services. Fog-based Vehicular Ad Hoc Networks (VANETs) combine the benefits of fog computing and vehicular cloud computing; because they depend on mobile power sources, they have specific limitations, and cost-effective routing and load distribution in VANETs pose additional difficulties. In this work, a novel method is developed for vehicular applications that addresses the allocation of limited fog resources and minimizes service latency by exploiting parked vehicles. An improved heuristic algorithm, the Revised Fitness-based Binary Battle Royale Optimizer (RF-BinBRO), is proposed to solve these vehicular network problems effectively. In addition, the combination of Deep Adaptive Reinforcement Learning (DARL) and the improved BinBRO algorithm effectively analyzes resource allocation, vehicle parking, and movement status, with parameters tuned by RF-BinBRO to achieve better transportation performance. Simulations are carried out to assess the performance of the proposed algorithm. The results show that the developed VFC resource allocation model attains higher service satisfaction than traditional resource allocation methods.
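As a rough illustration of the kind of binary heuristic search the abstract describes, the sketch below assigns vehicular tasks to parked-vehicle fog nodes with a battle-royale-style optimizer that minimizes total service latency. It is a minimal sketch under assumed data: the synthetic latency matrix, penalty weights, sigmoid transfer function, respawn threshold, and all names (latency_fitness, MAX_DAMAGE, and so on) are illustrative assumptions, not the paper's RF-BinBRO or DARL implementation, and random opponents stand in for BRO's nearest-neighbor pairing.

import numpy as np

rng = np.random.default_rng(0)

NUM_TASKS = 8        # tasks requesting fog resources (assumed)
NUM_NODES = 4        # parked vehicles acting as fog nodes (assumed)
POP_SIZE = 20        # candidate allocations ("players")
MAX_ITERS = 100
MAX_DAMAGE = 3       # respawn a player after this many losses (assumed)

# Synthetic problem data: per-task latency on each node and node capacities.
latency = rng.uniform(1.0, 10.0, size=(NUM_TASKS, NUM_NODES))
capacity = np.full(NUM_NODES, 3)

def latency_fitness(bits):
    """Total service latency plus penalties for invalid assignments."""
    assign = bits.reshape(NUM_TASKS, NUM_NODES)
    penalty = 50.0 * np.abs(assign.sum(axis=1) - 1).sum()                  # one node per task
    penalty += 50.0 * np.maximum(assign.sum(axis=0) - capacity, 0).sum()   # node capacity
    return (assign * latency).sum() + penalty

def to_binary(position):
    """Sigmoid transfer function mapping a continuous position to bits."""
    prob = 1.0 / (1.0 + np.exp(-position))
    return (rng.random(position.shape) < prob).astype(int)

dim = NUM_TASKS * NUM_NODES
positions = rng.normal(0.0, 1.0, size=(POP_SIZE, dim))
bits = np.array([to_binary(p) for p in positions])
fitness = np.array([latency_fitness(b) for b in bits])
damage = np.zeros(POP_SIZE, dtype=int)
best = fitness.argmin()
best_pos, best_fit = positions[best].copy(), fitness[best]

for _ in range(MAX_ITERS):
    for i in range(POP_SIZE):
        j = rng.integers(POP_SIZE)          # random opponent
        if j == i:
            continue
        loser = i if fitness[i] > fitness[j] else j
        damage[loser] += 1
        if damage[loser] >= MAX_DAMAGE:
            # Respawn: relocate a repeatedly defeated player at random.
            positions[loser] = rng.normal(0.0, 1.0, size=dim)
            damage[loser] = 0
        else:
            # Move the loser toward the best allocation found so far.
            positions[loser] += rng.random(dim) * (best_pos - positions[loser])
        bits[loser] = to_binary(positions[loser])
        fitness[loser] = latency_fitness(bits[loser])
        if fitness[loser] < best_fit:
            best_fit, best_pos = fitness[loser], positions[loser].copy()

print("Best total latency (including penalties):", round(best_fit, 2))

In the paper's full approach, a DARL agent additionally performs reward assessment over service satisfaction and latency while accounting for vehicle parking and movement status; only the heuristic allocation step is sketched here.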
Pages: 139056-139075
Number of pages: 20
Related Papers (50 in total)
  • [41] Chen, Qiliang; Heydari, Babak. Dynamic Resource Allocation in Systems-of-Systems Using a Heuristic-Based Interpretable Deep Reinforcement Learning. JOURNAL OF MECHANICAL DESIGN, 2022, 144 (09).
  • [42] Liu, Mingyuan; Quan, Wei; Yu, Chengxiao; Zhang, Xue; Gao, Deyun. Deep Reinforcement Learning based Adaptive Transmission Control in Vehicular Networks. 2021 IEEE 94TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2021-FALL), 2021.
  • [43] Birhanie, Habtamu Mohammed; Senouci, Sidi-Mohammed; Messous, Mohammed Ayoub; Arfaoui, Amel; Kies, Ali. A Stochastic Theoretical Game Approach for Resource Allocation in Vehicular Fog Computing. 2020 IEEE 17TH ANNUAL CONSUMER COMMUNICATIONS & NETWORKING CONFERENCE (CCNC 2020), 2020.
  • [44] Lan, Dapeng; Taherkordi, Amir; Eliassen, Frank; Liu, Lei. Deep Reinforcement Learning for Computation Offloading and Caching in Fog-Based Vehicular Networks. 2020 IEEE 17TH INTERNATIONAL CONFERENCE ON MOBILE AD HOC AND SMART SYSTEMS (MASS 2020), 2020: 622-630.
  • [45] Qiu, Bin; Wang, Yunxiao; Xiao, Hailin; Zhang, Zhongshan. Deep Reinforcement Learning-Based Adaptive Computation Offloading and Power Allocation in Vehicular Edge Computing Networks. IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2024, 25 (10): 13339-13349.
  • [46] Urmonov, Odilbek; Aliev, Hayotjon; Kim, HyungWon. Multi-Agent Deep Reinforcement Learning for Enhancement of Distributed Resource Allocation in Vehicular Network. IEEE SYSTEMS JOURNAL, 2023, 17 (01): 491-502.
  • [47] Tang, Xiaoyu; Tang, Zhaorong; Cui, Shuyao; Jin, Dantong; Qiu, Jibing. Dynamic Resource Allocation for Satellite Edge Computing: An Adaptive Reinforcement Learning-based Approach. 2023 IEEE INTERNATIONAL CONFERENCE ON SATELLITE COMPUTING, SATELLITE 2023, 2023: 55-56.
  • [48] Bansbach, Eike-Manuel; Eliachevitch, Victor; Schmalen, Laurent. Deep Reinforcement Learning for Wireless Resource Allocation Using Buffer State Information. 2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM), 2021.
  • [49] He, Ying; Wang, Yuhang; Qiu, Chao; Lin, Qiuzhen; Li, Jianqiang; Ming, Zhong. Blockchain-Based Edge Computing Resource Allocation in IoT: A Deep Reinforcement Learning Approach. IEEE INTERNET OF THINGS JOURNAL, 2021, 8 (04): 2226-2237.
  • [50] Dai, Yueyue; Yang, Huijiong; Yang, Huiran. Deep Reinforcement Learning for Resource Allocation in Blockchain-based Federated Learning. ICC 2023-IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023: 179-184.