Routing Edge-Cloud Requests via Multi-Objective Reinforcement Learning

Cited by: 0
|
Authors
Pichot, Simon [1 ,2 ]
Conan, Vania [1 ]
Khalife, Hicham [3 ]
Beylot, Andre-Luc [2 ]
Jakllari, Gentian [2 ]
Affiliations
[1] Thales SIX GTS FRANCE, Gennevilliers, Ile-de-France, France
[2] Univ Toulouse, Toulouse INP, CNRS, UT3, IRIT, Toulouse, France
[3] ERICSSON, Stockholm, Sweden
Keywords
Edge clouds; Energy consumption; Quality of service; Q-learning; MEC
DOI
10.1109/IWCMC61514.2024.10592386
CLC number
TP301 [Theory and Methods]
Discipline code
081202
Abstract
Edge cloud technologies enable faster and more energy-efficient processing of user requests on servers located at the network's edge. However, edge resources are significantly more limited than cloud resources, prompting a critical question: as user requests arrive, which ones should be allocated to the edge and which to the cloud? Additionally, to ensure a quality of service that accounts for task priorities, it is essential to reserve available resources for potential high-priority requests. This introduces a fundamental trade-off: reserving excessive computing resources at the edge in anticipation of high-priority requests is incompatible with reducing energy consumption. Conversely, allocating all edge resources may push higher-priority, potentially more delay-sensitive tasks into the cloud. To tackle this problem, we present a multi-objective reinforcement learning approach based on a modified Q-learning process. It enables intelligent resource allocation, striking a careful balance between fulfilling high-priority requests and minimizing unused processing capacity: while ensuring the satisfaction of high-priority services, the multi-objective agent makes optimal use of edge resources. Additionally, we introduce a framework for adapting the two cost components to specific requirements, favoring one objective over the other, and we examine their impact on relevant metrics. Results demonstrate that this specialized strategy outperforms both a conventional First Come First Served baseline and a myopic strategy (the unmodified Q-learning process) across these metrics.
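The trade-off sketched in the abstract can be illustrated with a minimal scalarized two-objective Q-learning loop. This is a toy reconstruction, not the paper's actual MDP: the state encoding, the two cost terms (offloading high-priority work to the cloud vs. leaving edge capacity idle), the weights `W_PRIORITY` / `W_IDLE`, and the arrival model are all assumptions made for illustration.

```python
import random
from collections import defaultdict

EDGE_SLOTS = 4                   # toy edge capacity (assumption)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
W_PRIORITY, W_IDLE = 1.0, 0.5    # weights of the two cost components

Q = defaultdict(float)           # Q[(state, action)] -> estimated cost

def cost(free, priority, action):
    """Scalarized cost: penalize pushing high-priority work to the cloud
    (action 0) and penalize edge capacity left idle after the decision."""
    c_priority = priority if action == 0 else 0.0
    free_after = free - 1 if (action == 1 and free > 0) else free
    c_idle = free_after / EDGE_SLOTS
    return W_PRIORITY * c_priority + W_IDLE * c_idle

def step(free, priority):
    """Route one arriving request: 0 = cloud, 1 = edge (if a slot is free)."""
    state = (free, priority)
    actions = [0, 1] if free > 0 else [0]
    if random.random() < EPS:                       # epsilon-greedy exploration
        action = random.choice(actions)
    else:                                           # pick the cheapest action
        action = min(actions, key=lambda a: Q[(state, a)])
    c = cost(free, priority, action)
    free_next = free - 1 if action == 1 else free
    next_state = (free_next, random.choice([0, 1, 2]))  # next request's priority
    next_actions = [0, 1] if free_next > 0 else [0]
    best_next = min(Q[(next_state, a)] for a in next_actions)
    # standard Q-learning update, minimizing cost instead of maximizing reward
    Q[(state, action)] += ALPHA * (c + GAMMA * best_next - Q[(state, action)])
    return action, free_next

random.seed(0)
free = EDGE_SLOTS
for _ in range(5000):
    priority = random.choice([0, 1, 2])   # 2 = high priority
    action, free = step(free, priority)
    if random.random() < 0.3 and free < EDGE_SLOTS:
        free += 1                         # a running edge task completes
```

Raising `W_PRIORITY` relative to `W_IDLE` (or vice versa) reproduces, in miniature, the framework the abstract describes for favoring one objective over the other.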
Pages: 861 - 866
Page count: 6