Stateless Q-learning algorithm for service caching in resource constrained edge environment

Cited by: 2
Authors
Huang, Binbin [1 ]
Ran, Ziqi [1 ]
Yu, Dongjin [1 ]
Xiang, Yuanyuan [1 ]
Shi, Xiaoying [1 ]
Li, Zhongjin [1 ]
Xu, Zhengqian [1 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Comp, Hangzhou 310018, Peoples R China
Funding
National Natural Science Foundation of China; National Science Foundation (USA)
Keywords
Edge environment; service caching; Stateless Q-learning; Collaboration cost; Service latency;
DOI
10.1186/s13677-023-00506-7
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
In resource-constrained edge environments, multiple service providers compete to rent the limited resources of edge servers close to end users in order to cache their service instances, thereby significantly reducing service delay and improving quality of service (QoS). However, renting resources on different edge servers to deploy service instances incurs different resource usage costs and service delays. To make full use of the limited resources of edge servers and further reduce resource usage costs, multiple service providers can form a coalition and share the limited resources of an edge server. In this paper, we investigate the service caching problem of multiple service providers in a resource-constrained edge environment and propose an independent-learners-based service caching scheme (ILSCS), which adopts stateless Q-learning to learn an optimal service caching scheme. To verify the effectiveness of the ILSCS scheme, we implement four baseline algorithms (COALITION, RANDOM, MDU, and MCS) and compare the total collaboration cost and service latency of the ILSCS scheme with those of the four baselines under different experimental parameter settings. Extensive experimental results show that the ILSCS scheme achieves a lower total collaboration cost and service latency.
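In stateless Q-learning, as used by independent learners of this kind, each agent keeps one Q-value per action (here, per candidate caching decision) rather than a full state-action table, and nudges that value toward the reward it observes. The sketch below is a generic illustration of this update rule, not the paper's exact formulation: the edge-server names, cost values, and parameters are hypothetical assumptions.

```python
import random

def stateless_q_learning(actions, reward_fn, episodes=500,
                         alpha=0.1, epsilon=0.2, seed=0):
    """Stateless Q-learning: one Q-value per action, no state transitions.

    Update rule: Q[a] <- Q[a] + alpha * (r - Q[a]); unlike full Q-learning,
    there is no discounted next-state term because there is no state.
    """
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}
    for _ in range(episodes):
        # epsilon-greedy exploration over the action set
        if rng.random() < epsilon:
            a = rng.choice(actions)
        else:
            a = max(q, key=q.get)
        r = reward_fn(a)
        q[a] += alpha * (r - q[a])  # move estimate toward observed reward
    return q

# Toy example: a provider choosing among three hypothetical edge servers,
# where reward is the negative of a fixed resource usage cost.
costs = {"edge0": 5.0, "edge1": 2.0, "edge2": 8.0}
q = stateless_q_learning(list(costs), lambda a: -costs[a])
best = max(q, key=q.get)  # the cheapest server, "edge1"
```

In the multi-provider setting described in the abstract, each provider would run such an independent learner, with the reward reflecting its share of the coalition's collaboration cost and service latency.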
Pages: 13
Related papers
50 records total
  • [21] Stateless Q-Learning Algorithm for Training of Radial Basis Function Based Neural Networks in Medical Data Classification
    Kusy, Maciej
    Zajdel, Roman
    INTELLIGENT SYSTEMS IN TECHNICAL AND MEDICAL DIAGNOSTICS, 2014, 230 : 267 - 278
  • [22] ENHANCEMENTS OF FUZZY Q-LEARNING ALGORITHM
    Glowaty, Grzegorz
    COMPUTER SCIENCE-AGH, 2005, 7 : 77 - 87
  • [23] An analysis of the pheromone Q-learning algorithm
    Monekosso, N
    Remagnino, P
    ADVANCES IN ARTIFICIAL INTELLIGENCE - IBERAMIA 2002, PROCEEDINGS, 2002, 2527 : 224 - 232
  • [24] A Weighted Smooth Q-Learning Algorithm
    Vijesh, V. Antony
    Shreyas, S. R.
    IEEE CONTROL SYSTEMS LETTERS, 2025, 9 : 21 - 26
  • [25] An improved immune Q-learning algorithm
    Ji, Zhengqiao
    Wu, Q. M. Jonathan
    Sid-Ahmed, Maher
    2007 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS, VOLS 1-8, 2007, : 3330 - +
  • [26] Multi-Agent Cooperation Q-Learning Algorithm Based on Constrained Markov Game
    Ge, Yangyang
    Zhu, Fei
    Huang, Wei
    Zhao, Peiyao
    Liu, Quan
    COMPUTER SCIENCE AND INFORMATION SYSTEMS, 2020, 17 (02) : 647 - 664
  • [27] Towards cost-effective service migration in mobile edge: A Q-learning approach
    Wang, Yang
    Cao, Shan
    Ren, Hongshuai
    Li, Jianjun
    Ye, Kejiang
    Xu, Chengzhong
    Chen, Xi
    JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2020, 146 : 175 - 188
  • [28] A Service-Centric Q-Learning Algorithm for Mobility Robustness Optimization in LTE
    Luisa Mari-Altozano, Maria
    Mwanje, Stephen S.
    Luna Ramirez, Salvador
    Toril, Matias
    Sanneck, Henning
    Gijon, Carolina
    IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, 2021, 18 (03): : 3541 - 3555
  • [29] A Fast Deep Q-learning Network Edge Cloud Migration Strategy for Vehicular Service
    Peng Jun
    Wang Chenglong
    Jiang Fu
    Gu Xin
    Mu Yueyue
    Liu Weirong
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2020, 42 (01) : 58 - 64
  • [30] Low load DIDS task scheduling based on Q-learning in edge computing environment
    Zhao, Xu
    Huang, Guangqiu
    Gao, Ling
    Li, Maozhen
    Gao, Quanli
    JOURNAL OF NETWORK AND COMPUTER APPLICATIONS, 2021, 188