Reinforcement Learning for value-based Placement of Fog Services

Cited by: 0
Authors
Poltronieri, Filippo [1 ]
Tortonesi, Mauro [1 ]
Stefanelli, Cesare [1 ]
Suri, Niranjan [2 ,3 ]
Affiliations
[1] Univ Ferrara, Distributed Syst Res Grp, Ferrara, Italy
[2] Florida Inst Human & Machine Cognit IHMC, Pensacola, FL USA
[3] US Army Res Lab ARL, Adelphi, MD USA
Keywords
Fog Computing; Service Management; Reinforcement Learning; Allocation; Model
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Optimal service and resource management in Fog Computing is an active research area in academia. To fulfill its promise of enabling a new generation of immersive, adaptive, and context-aware services, Fog Computing requires novel solutions capable of better exploiting the computational and network resources available at the edge. Resource management in Fog Computing could particularly benefit from self-* approaches capable of learning the resource allocation strategies that best adapt to ever-changing conditions. In this context, Reinforcement Learning (RL), a technique that trains software agents to learn which actions maximize a reward, represents a compelling approach to investigate. In this paper, we explore RL as an optimization method for the value-based management of Fog services over a pool of Fog nodes. More specifically, we propose FogReinForce, a solution based on the Deep Q-Network (DQN) algorithm that learns to select the allocation of service components that maximizes the value-based utility provided by those services.
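To make the abstract's idea concrete, the sketch below shows one way a DQN-style agent could learn where to place service components on fog nodes so as to maximize a utility-based reward. It is purely illustrative and is not the FogReinForce implementation: the toy environment, the state encoding, the capacity-based reward, and all hyperparameters are assumptions of this note, and standard DQN refinements (target network, learning-rate schedules, prioritized replay) are omitted for brevity.

```python
# Minimal DQN-style placement sketch (PyTorch), assuming a toy environment.
# Not the authors' FogReinForce design; reward, state, and parameters are illustrative.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class ToyFogPlacementEnv:
    """Hypothetical environment: place one service component per step on one
    of n_nodes fog nodes; the reward is a toy utility favoring spare capacity."""

    def __init__(self, n_nodes=4, n_components=6):
        self.n_nodes = n_nodes
        self.n_components = n_components
        self.reset()

    def reset(self):
        self.load = [0.0] * self.n_nodes      # current load per node
        self.remaining = self.n_components    # components left to place
        return self._state()

    def _state(self):
        # State: per-node load plus number of components still to place.
        return torch.tensor(self.load + [float(self.remaining)], dtype=torch.float32)

    def step(self, action):
        self.load[action] += 1.0
        self.remaining -= 1
        # Assumed utility: +1 while the chosen node stays within a capacity of 2 units.
        reward = 1.0 if self.load[action] <= 2.0 else -1.0
        done = self.remaining == 0
        return self._state(), reward, done


def make_qnet(state_dim, n_actions):
    return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))


def train(episodes=200, gamma=0.95, eps=0.2, batch_size=32):
    env = ToyFogPlacementEnv()
    state_dim, n_actions = env.n_nodes + 1, env.n_nodes
    qnet = make_qnet(state_dim, n_actions)
    optimizer = optim.Adam(qnet.parameters(), lr=1e-3)
    buffer = deque(maxlen=5000)  # experience replay

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy choice of the placement target node.
            if random.random() < eps:
                action = random.randrange(n_actions)
            else:
                with torch.no_grad():
                    action = int(qnet(state).argmax())
            next_state, reward, done = env.step(action)
            buffer.append((state, action, reward, next_state, done))
            state = next_state

            if len(buffer) >= batch_size:
                s, a, r, s2, d = zip(*random.sample(buffer, batch_size))
                s, s2 = torch.stack(s), torch.stack(s2)
                a = torch.tensor(a)
                r = torch.tensor(r, dtype=torch.float32)
                d = torch.tensor(d, dtype=torch.float32)
                q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
                with torch.no_grad():
                    target = r + gamma * (1 - d) * qnet(s2).max(1).values
                loss = nn.functional.mse_loss(q, target)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    return qnet


if __name__ == "__main__":
    train()
```

In a realistic setting the state would encode node capacities, network conditions, and component requirements, and the reward would come from the value-based utility model described in the paper; those details are specific to the authors' formulation and are not reproduced here.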
Pages: 466 - 472
Number of pages: 7
Related Papers
50 records in total
  • [1] Reinforcement Learning Based Scheme for On-Demand Vehicular Fog Formation and Micro Services Placement
    Nsouli, Ahmad
    Mourad, Azzam
    El-Hajj, Wassim
    2022 INTERNATIONAL WIRELESS COMMUNICATIONS AND MOBILE COMPUTING, IWCMC, 2022, : 1244 - 1249
  • [2] Phileas: A Simulation-based Approach for the Evaluation of Value-based Fog Services
    Poltronieri, Filippo
    Stefanelli, Cesare
    Suri, Niranjan
    Tortonesi, Mauro
    2018 IEEE 23RD INTERNATIONAL WORKSHOP ON COMPUTER AIDED MODELING AND DESIGN OF COMMUNICATION LINKS AND NETWORKS (CAMAD), 2018, : 210 - 215
  • [3] The impact of environmental stochasticity on value-based multiobjective reinforcement learning
    Vamplew, Peter
    Foale, Cameron
    Dazeley, Richard
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (03): 1783 - 1799
  • [4] Value-Based Reinforcement Learning for Digital Twins in Cloud Computing
    Van-Phuc Bui
    Pandey, Shashi Raj
    de Sant Ana, Pedro M.
    Popovski, Petar
    ICC 2024 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2024, : 1413 - 1418
  • [5] A reinforcement learning diffusion decision model for value-based decisions
    Fontanesi, Laura
    Gluth, Sebastian
    Spektor, Mikhail S.
    Rieskamp, Joerg
    PSYCHONOMIC BULLETIN & REVIEW, 2019, 26 (04) : 1099 - 1121
  • [6] Advances in Value-based, Policy-based, and Deep Learning-based Reinforcement Learning
    Byeon, Haewon
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2023, 14 (08) : 348 - 354
  • [7] Learning to discover value: Value-based pricing and selling capabilities for services and solutions
    Raja, Jawwad Z.
    Frandsen, Thomas
    Kowalkowski, Christian
    Jarmatz, Martin
    JOURNAL OF BUSINESS RESEARCH, 2020, 114 : 142 - 159
  • [8] Sparse distributed memories for on-line value-based reinforcement learning
    Ratitch, B
    Precup, D
    MACHINE LEARNING: ECML 2004, PROCEEDINGS, 2004, 3201 : 347 - 358