Investigating the performance of multi-objective reinforcement learning techniques in the context of IoT with harvesting energy

Cited by: 0
Authors
Haouari, Bakhta [1 ,2 ,3 ]
Mzid, Rania [1 ,4 ]
Mosbahi, Olfa [2 ]
Affiliations
[1] Univ Tunis El Manar, ISI, 2 Rue Abourraihan Al Bayrouni, Ariana 2080, Tunisia
[2] Univ Carthage, LISI Lab INSAT, Ctr Urbain Nord BP 676, Tunis 1080, Tunisia
[3] Univ Carthage, Tunisia Polytech Sch, BP 743, La Marsa 2078, Tunisia
[4] Univ Sfax, CES Lab ENIS, BP w3, Sfax 3038, Tunisia
Source
JOURNAL OF SUPERCOMPUTING | 2025, Vol. 81, Issue 4
Keywords
IoT; Energy harvesting; Multi-objective optimization; Reinforcement learning; Scalarization; Pareto Q-learning;
DOI
10.1007/s11227-025-07010-6
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
In the realm of IoT, wireless sensor networks (WSNs) play a crucial role in efficient data collection and task execution. However, energy constraints, particularly in battery-powered WSNs, present significant challenges. Energy harvesting (EH) technologies extend battery life but introduce variability that can impact quality of service (QoS). This paper introduces QoSA, a reinforcement learning (RL) agent designed to optimize QoS while adhering to energy constraints in IoT gateways. QoSA employs both single-policy and multi-policy RL methods to address trade-offs between conflicting objectives. This study investigates the performance of these methods in identifying Pareto front solutions for optimal service activation. A comparative analysis highlights the strengths and weaknesses of each proposed algorithm. Experimental results show that multi-policy methods outperform their single-policy counterparts in balancing trade-offs, demonstrating their effectiveness in real-world IoT applications.
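Since the abstract contrasts single-policy (scalarization) and multi-policy (Pareto Q-learning) approaches, the following minimal sketch illustrates the single-policy side: linearly scalarized Q-learning on a toy two-objective gateway model (QoS vs. energy). The environment, all names (N_SERVICES, ENERGY_LEVELS, WEIGHTS), and the parameter values are illustrative assumptions, not the paper's QoSA implementation.

```python
import random

# Illustrative sketch only: linearly scalarized Q-learning on a toy
# two-objective IoT-gateway model (QoS vs. energy). The environment,
# names (N_SERVICES, ENERGY_LEVELS, WEIGHTS), and parameters are
# assumptions made for illustration, not the paper's QoSA agent.

N_SERVICES = 4        # action = number of services to activate (0..4)
ENERGY_LEVELS = 10    # discretized level of the harvested-energy buffer
WEIGHTS = (0.7, 0.3)  # fixed trade-off between the two objectives

def step(energy, action):
    """Toy dynamics: more active services mean more QoS but more drain."""
    harvested = random.randint(0, 2)  # stochastic energy-harvesting input
    nxt = max(0, min(ENERGY_LEVELS - 1, energy - action + harvested))
    qos = float(action) if action <= energy else 0.0  # empty buffer: no service
    return nxt, (qos, -float(action))  # reward vector (QoS, energy cost)

def scalarize(rewards):
    # Single-policy approach: collapse the reward vector with fixed weights,
    # so one run of Q-learning yields one point on the Pareto front.
    return sum(w * r for w, r in zip(WEIGHTS, rewards))

actions = range(N_SERVICES + 1)
Q = {(s, a): 0.0 for s in range(ENERGY_LEVELS) for a in actions}
alpha, gamma, eps = 0.1, 0.95, 0.1

state = ENERGY_LEVELS // 2
for _ in range(20000):  # continuing (non-episodic) interaction loop
    if random.random() < eps:                              # explore
        action = random.choice(list(actions))
    else:                                                  # exploit
        action = max(actions, key=lambda a: Q[(state, a)])
    nxt, rewards = step(state, action)
    target = scalarize(rewards) + gamma * max(Q[(nxt, a)] for a in actions)
    Q[(state, action)] += alpha * (target - Q[(state, action)])
    state = nxt

# Greedy activation policy per energy level for this particular weighting.
print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(ENERGY_LEVELS)})
```

A multi-policy method such as Pareto Q-learning would instead maintain a set of non-dominated Q-vectors for each state-action pair and apply a dominance test in place of the scalar max, so a single learning run recovers several Pareto-front policies rather than the one point pinned down by the fixed WEIGHTS above.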
Pages: 49
Related papers
50 records in total
  • [21] Decomposition based Multi-Objective Evolutionary Algorithm in XCS for Multi-Objective Reinforcement Learning
    Cheng, Xiu
    Browne, Will N.
    Zhang, Mengjie
    2018 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC), 2018, : 622 - 629
  • [22] Multi-objective Energy Management for We-Energy in Energy Internet using Reinforcement Learning
    Sun, Qiuye
    Wang, Danlu
    Ma, Dazhong
    Huang, Bonan
    2017 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (SSCI), 2017, : 1630 - 1635
  • [23] Track Learning Agent Using Multi-objective Reinforcement Learning
    Shah, Rushabh
    Ruparel, Vidhi
    Prabhu, Mukul
    D'mello, Lynette
    FOURTH CONGRESS ON INTELLIGENT SYSTEMS, VOL 1, CIS 2023, 2024, 868 : 27 - 40
  • [24] A Multi-objective Reinforcement Learning Algorithm for JSSP
    Mendez-Hernandez, Beatriz M.
    Rodriguez-Bazan, Erick D.
    Martinez-Jimenez, Yailen
    Libin, Pieter
    Nowe, Ann
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: THEORETICAL NEURAL COMPUTATION, PT I, 2019, 11727 : 567 - 584
  • [25] Multi-Objective Service Composition Using Reinforcement Learning
    Moustafa, Ahmed
    Zhang, Minjie
    SERVICE-ORIENTED COMPUTING, ICSOC 2013, 2013, 8274 : 298 - 312
  • [26] Taming Lagrangian chaos with multi-objective reinforcement learning
    Calascibetta, Chiara
    Biferale, Luca
    Borra, Francesco
    Celani, Antonio
    Cencini, Massimo
    EUROPEAN PHYSICAL JOURNAL E, 2023, 46 (03)
  • [27] Multi-Objective Reinforcement Learning for Designing Ethical Environments
    Rodriguez-Soto, Manel
    Lopez-Sanchez, Maite
    Rodriguez-Aguilar, Juan A.
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 545 - 551
  • [28] Urban Driving with Multi-Objective Deep Reinforcement Learning
    Li, Changjian
    Czarnecki, Krzysztof
    AAMAS '19: PROCEEDINGS OF THE 18TH INTERNATIONAL CONFERENCE ON AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS, 2019, : 359 - 367
  • [29] A temporal difference method for multi-objective reinforcement learning
    Ruiz-Montiel, Manuela
    Mandow, Lawrence
    Perez-de-la-Cruz, Jose-Luis
    NEUROCOMPUTING, 2017, 263 : 15 - 25
  • [30] Multi-Objective Order Scheduling via Reinforcement Learning
    Chen, Sirui
    Tian, Yuming
    An, Lingling
    ALGORITHMS, 2023, 16 (11)