Q-Learning based SFC deployment on Edge Computing Environment

Cited by: 0
Authors
Pandey, Suman [1 ]
Hong, James Won-Ki [1 ]
Yoo, Jae-Hyoung [1 ]
Affiliations
[1] POSTECH, Pohang, South Korea
Keywords
SFC; VNF; SDN; Edge Computing; Q-Learning; Reinforcement Learning;
DOI
10.23919/apnoms50412.2020.9236981
Chinese Library Classification (CLC)
TN [Electronic technology, communication technology]
Discipline code
0809
Abstract
Reinforcement learning (RL) has been used in various path-finding applications including games, robotics, and autonomous systems. Deploying a Service Function Chain (SFC) with an optimal path and optimal resource utilization in an edge computing environment is an important and challenging problem in the Software Defined Network (SDN) paradigm. In this paper, we use an RL-based Q-Learning algorithm to find an optimal SFC deployment path in an edge computing environment with limited computing and storage resources. To achieve this, our deployment scenario uses a hierarchical network structure with local, neighbor, and datacenter servers. Our Q-Learning algorithm uses an intuitive reward function that depends not only on the optimal path but also on edge computing resource utilization and SFC length. We defined regret and empirical standard deviation as evaluation parameters. We evaluated our results on 1200 test cases with varying SFC length, edge resources, and Virtual Network Function (VNF) resource demands. The computation time of our algorithm varies between 0.03 and 0.6 seconds depending on the SFC length and resource requirements.
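The approach the abstract describes can be illustrated with a minimal tabular Q-learning sketch. Everything below is an assumption for illustration, not the paper's implementation: the three-node local/neighbor/datacenter topology, the per-node costs standing in for the resource-utilization term of the reward, and the reward and hyperparameter values are all invented.

```python
import random

def q_learning_path(graph, costs, source, target,
                    episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Learn a deployment path from source to target with tabular Q-learning.

    graph: adjacency dict of server nodes; costs: per-node placement cost,
    a stand-in for the paper's resource-utilization reward term.
    """
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in graph for a in graph[s]}

    for _ in range(episodes):
        state = source
        while state != target:
            actions = graph[state]
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                action = rng.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            # Illustrative reward: pay the next node's resource cost each hop,
            # with a bonus for completing the chain at the target server.
            reward = -costs[action] + (10.0 if action == target else 0.0)
            next_best = max(q[(action, a)] for a in graph[action])
            q[(state, action)] += alpha * (reward + gamma * next_best - q[(state, action)])
            state = action

    # Greedy rollout of the learned policy.
    path, state = [source], source
    while state != target and len(path) <= len(graph):
        state = max(graph[state], key=lambda a: q[(state, a)])
        path.append(state)
    return path

# Toy hierarchical topology: local edge server, neighbor edge server, datacenter.
graph = {"local": ["neighbor", "dc"],
         "neighbor": ["local", "dc"],
         "dc": ["local", "neighbor"]}
costs = {"local": 1.0, "neighbor": 2.0, "dc": 5.0}
print(q_learning_path(graph, costs, "local", "dc"))  # → ['local', 'dc']
```

With these costs the direct hop to the datacenter (cost 5) still beats detouring through the neighbor (costs 2 + 5), so the greedy rollout takes the one-hop path; penalizing each hop also discourages long chains, loosely mirroring the SFC-length term in the paper's reward.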
Pages: 220-226
Page count: 7
Related papers
50 records
  • [31] Efficient service deployment in mobile edge computing environment
    Lu J.
    Li J.
    Liu W.
    Sun Q.
    Zhou A.
    Inderscience Publishers (16): 126-146
  • [32] Optimal privacy preservation strategies with signaling Q-learning for edge-computing-based IoT resource grant systems
    Shen, Shigen
    Wu, Xiaoping
    Sun, Panjun
    Zhou, Haiping
    Wu, Zongda
    Yu, Shui
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 225
  • [33] Clustering-based Algorithm for Services Deployment in Mobile Edge Computing Environment
    Wang, Yamin
    Cao, Zhiying
    Zhang, Xiuguo
    Zhou, Huijie
    Li, Wenjia
    2019 IEEE 25TH INTERNATIONAL CONFERENCE ON PARALLEL AND DISTRIBUTED SYSTEMS (ICPADS), 2019, : 963 - 966
  • [34] Modification of Q-learning to Adapt to the Randomness of Environment
    Luo, Xiulian
    Gao, Youbing
    Huang, Shao
    Zhao, Yaodong
    Zhang, Shengmiao
    ICCAIS 2019: THE 8TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND INFORMATION SCIENCES, 2019,
  • [35] Q-learning with Experience Replay in a Dynamic Environment
    Pieters, Mathijs
    Wiering, Marco A.
    PROCEEDINGS OF 2016 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2016,
  • [36] A Q-learning approach based on human reasoning for navigation in a dynamic environment
    Yuan, Rupeng
    Zhang, Fuhai
    Wang, Yu
    Fu, Yili
    Wang, Shuguo
    ROBOTICA, 2019, 37 (03) : 445 - 468
  • [37] Novel Virtual Network Function Service Chain Deployment Algorithm based on Q-learning
    Xuan, Hejun
    Lu, Jun
    Li, Na
    Wang, Leijie
    IAENG International Journal of Computer Science, 2023, 50 (02)
  • [38] Autonomous Navigation based on a Q-learning algorithm for a Robot in a Real Environment
    Strauss, Clement
    Sahin, Ferat
    2008 IEEE INTERNATIONAL CONFERENCE ON SYSTEM OF SYSTEMS ENGINEERING (SOSE), 2008, : 361 - 365
  • [39] Entropy-based tuning approach for Q-learning in an unstructured environment
    Chen, Yu-Jen
    Jiang, Wei-Cheng
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2025, 187
  • [40] Multi-agent Q-learning Based Navigation in an Unknown Environment
    Nath, Amar
    Niyogi, Rajdeep
    Singh, Tajinder
    Kumar, Virendra
    ADVANCED INFORMATION NETWORKING AND APPLICATIONS, AINA-2022, VOL 1, 2022, 449 : 330 - 340