Resource Allocation and Control Co-aware Smart Computation Offloading for Blockchain-Enabled IoT

Cited by: 0
Authors
Chen S.-G. [1 ]
Wang Q. [1 ]
Zhang H.-J. [2 ]
Wang K. [3 ]
Affiliations
[1] Jiangsu Key Lab of Broadband Wireless Communication and Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing
[2] Department of Communication Engineering, University of Science and Technology Beijing, Beijing
[3] Department of Electrical and Computer Engineering, University of California Los Angeles, Los Angeles, CA 90095
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China
Keywords
Blockchain; Computation offloading; Deep reinforcement learning; Fog computing; Resource allocation
DOI
10.11897/SP.J.1016.2022.00472
Abstract
Under the big data scenario, a remote cloud server is usually deployed for data processing and value mining, but for delay-sensitive applications or applications with dynamic and frequent updates, this processing paradigm falls short in practice. As a complement to the cloud computing paradigm, fog computing has attracted great attention because it can effectively reduce task processing delay, energy consumption and bandwidth consumption. At the same time, fog computing based computation offloading has become a research focus, because it can effectively alleviate the processing burden of nodes and improve the user experience. Under the fog computing paradigm, to better meet the delay and energy-consumption requirements of computation-intensive tasks, this paper proposes a resource allocation and control co-aware smart computation offloading scheme for the blockchain-enabled Internet of Things (IoT) scenario. Specifically, an optimization problem is formulated to minimize the total cost of all tasks under constraints on delay, energy consumption, and communication and computation resources. The total cost comprises the delay, energy consumption and resource mining costs, and is minimized by jointly optimizing the communication resources, computation resources and offloading decisions. To encourage the active participation of terminals and the fog node in the computation offloading process, and to better match real-world scenarios, an incentive mechanism is designed: to complete task offloading, a terminal acts as a miner and mines (rents) computation resources from the fog node, and the fog node charges a fee according to the resources the terminal requires. For terminals that successfully obtain resources and complete their tasks efficiently, the system allocates rewards in proportion to the computation resources they occupy, which ensures fair reward allocation among the successful miners. This mechanism allows both the fog node and the terminals to benefit from the computation offloading process, which promotes their collaboration; meanwhile, the blockchain-based incentive mechanism guarantees the security of the transaction process. To solve the formulated optimization problem, which is a mixed integer nonlinear programming problem, we propose a communication, computation and control co-aware smart computation offloading algorithm (3CC-SCO). Building on the deep deterministic policy gradient (DDPG) algorithm, 3CC-SCO adopts a double actor-critic neural network structure with inverting-gradient updates to improve the stability and convergence rate of training; at the same time, by probabilistically discretizing the continuous action output, it is better suited to the mixed integer optimization problem. Finally, simulation results demonstrate that the proposed scheme converges quickly to the optimal value and achieves the lowest total cost compared with the three benchmark schemes; for example, compared with the best-performing benchmark, a deep Q-network (DQN) based computation offloading scheme, it reduces the total cost by an average of 15.2%. © 2022, Science Press. All rights reserved.
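For illustration only, the following is a minimal sketch of the kind of cost-minimization problem the abstract describes; the notation, weights and constraint forms are assumptions chosen for readability, not the paper's actual formulation:

```latex
\min_{\{x_i,\, b_i,\, f_i\}} \ \sum_{i=1}^{N}
\Big( \alpha\, T_i(x_i, b_i, f_i) + \beta\, E_i(x_i, b_i, f_i)
      + \gamma\, C^{\mathrm{mine}}_i(x_i, f_i) \Big)
\quad \text{s.t.}\quad
T_i \le T^{\max}_i,\ \
E_i \le E^{\max}_i,\ \
\sum_{i} x_i b_i \le B,\ \
\sum_{i} x_i f_i \le F,\ \
x_i \in \{0, 1\}
```

Here $x_i$ is the binary offloading decision of task $i$, $b_i$ and $f_i$ are the allocated communication and fog computation resources, $T_i$, $E_i$ and $C^{\mathrm{mine}}_i$ stand for the delay, energy and resource-mining cost terms, and $B$, $F$ are the fog node's total bandwidth and computation capacity. The binary $x_i$ alongside the continuous $b_i$, $f_i$ is what makes such a problem a mixed integer nonlinear program.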
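Likewise, below is a minimal, hedged sketch (in PyTorch, which the paper does not specify) of two ingredients the abstract attributes to 3CC-SCO: inverting-gradient updates that keep the actor's continuous resource actions inside their bounds, and probabilistic discretization of a continuous output into the binary offloading decision. Network sizes, bounds and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

ACT_MIN, ACT_MAX = 0.0, 1.0  # assumed bounds for the resource-share actions

class Actor(nn.Module):
    """Toy actor: maps a state to continuous actions in [0, 1]
    (e.g., bandwidth share, CPU share, offloading probability)."""
    def __init__(self, state_dim=8, action_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Sigmoid(),
        )

    def forward(self, s):
        return self.net(s)

def invert_gradients(grad_a, a, a_min=ACT_MIN, a_max=ACT_MAX):
    """Scale dQ/da so that updates pushing an action toward a bound shrink
    as that bound is approached (inverting-gradient rule)."""
    rng = a_max - a_min
    room_up = (a_max - a) / rng     # remaining room to increase the action
    room_down = (a - a_min) / rng   # remaining room to decrease the action
    return torch.where(grad_a > 0, grad_a * room_up, grad_a * room_down)

def sample_offloading_decision(p_offload):
    """Probabilistic discretization: a continuous output in [0, 1] is read
    as the probability of offloading, then sampled into a 0/1 decision."""
    return torch.bernoulli(p_offload)  # 1 = offload to fog node, 0 = local

# --- one illustrative actor update step ------------------------------------
actor = Actor()
optimizer = torch.optim.Adam(actor.parameters(), lr=1e-3)

state = torch.randn(16, 8)          # dummy batch of task/channel states
action = actor(state)               # continuous actions in [0, 1]

# Stand-in for the critic's Q(s, a); a learned critic network would be used here.
q_value = -((action - 0.3) ** 2).sum()

grad_a = torch.autograd.grad(q_value, action, retain_graph=True)[0]
grad_a = invert_gradients(grad_a, action.detach())

optimizer.zero_grad()
action.backward(-grad_a)            # ascend Q through the (inverted) gradient
optimizer.step()

# Discretize the first action dimension into the binary offloading decision.
decision = sample_offloading_decision(actor(state)[:, 0].detach())
print(decision[:5])
```

Scaling the gradient near the bounds avoids the saturation that hard clipping causes in DDPG-style training, while Bernoulli sampling of the offloading probability lets one continuous policy drive a mixed discrete-continuous action space.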
Pages: 472-484
Number of pages: 12