Predictive Energy-Aware Adaptive Sampling with Deep Reinforcement Learning

Cited by: 2
Authors
Heo, Seonyeong [1 ]
Mayer, Philipp [1 ]
Magno, Michele [1 ]
Affiliations
[1] Swiss Federal Institute of Technology (ETH Zurich), Department of Information Technology and Electrical Engineering, Zurich, Switzerland
Keywords
Adaptive sampling; energy harvesting; energy management; wireless smart sensors; reinforcement learning
DOI
10.1109/ICECS202256217.2022.9971120
CLC Classification Number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
Energy harvesting can enable wireless smart sensors to be self-sustaining by allowing them to gather energy from the environment. However, since energy availability changes dynamically with the environment, it is difficult to find an optimal energy management strategy at design time. One existing approach to handling dynamic energy availability is energy-aware adaptive sampling, which changes the sampling rate of a sensor according to the energy state. This work proposes deep reinforcement learning-based predictive adaptive sampling for a wireless sensor node. The proposed approach applies deep reinforcement learning to find an effective adaptive sampling strategy based on the harvesting power and energy level. In addition, it enables predictive adaptive sampling through adaptive sampling models that consider the trend of the energy state. The evaluation results show that the predictive models can successfully manage the energy budget under dynamic energy availability, maintaining a stable energy state for up to 11.5% longer.
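
To make the formulation concrete, below is a minimal sketch (not the authors' code) of the state/action/reward structure the abstract implies: the state combines a harvesting-power bin, a stored-energy bin, and an energy-trend flag, and the action selects a sampling rate. For brevity it uses tabular Q-learning in place of the paper's deep network; all constants, bins, the reward shape, and the toy harvesting trace are illustrative assumptions.

    # Minimal sketch (assumed, not from the paper): tabular Q-learning stand-in
    # for a deep RL adaptive-sampling agent. State = (harvesting-power bin,
    # energy-level bin, energy-trend flag); action = sampling-rate index.
    import random
    from collections import defaultdict

    SAMPLING_RATES_HZ = [0.1, 0.5, 1.0, 2.0]   # candidate sensor sampling rates
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1     # learning rate, discount, exploration

    Q = defaultdict(lambda: [0.0] * len(SAMPLING_RATES_HZ))

    def discretize(harvest_mw, energy_frac, trend):
        """Map continuous readings to a coarse state tuple (assumed binning)."""
        return (min(int(harvest_mw), 4),          # harvesting-power bin
                min(int(energy_frac * 10), 9),    # stored-energy bin (0..9)
                0 if trend < 0 else 1)            # energy trend: falling / rising

    def choose_action(state):
        """Epsilon-greedy selection over candidate sampling rates."""
        if random.random() < EPSILON:
            return random.randrange(len(SAMPLING_RATES_HZ))
        return max(range(len(SAMPLING_RATES_HZ)), key=lambda a: Q[state][a])

    def reward(energy_frac, rate_hz):
        """Toy reward: favor high sampling rates, penalize a drained buffer."""
        return rate_hz - (5.0 if energy_frac < 0.2 else 0.0)

    def update(state, action, r, next_state):
        """One Q-learning backup; a deep agent would fit a network to this target."""
        target = r + GAMMA * max(Q[next_state])
        Q[state][action] += ALPHA * (target - Q[state][action])

    # Toy episode: random harvesting and simple energy bookkeeping.
    energy, prev_energy = 0.5, 0.5
    state = discretize(1.0, energy, 0.0)
    for _ in range(1000):
        a = choose_action(state)
        rate = SAMPLING_RATES_HZ[a]
        harvest = random.uniform(0.0, 2.0)  # mW, illustrative
        energy = min(1.0, max(0.0, energy + 0.01 * harvest - 0.02 * rate))
        nxt = discretize(harvest, energy, energy - prev_energy)
        update(state, a, reward(energy, rate), nxt)
        state, prev_energy = nxt, energy

In the paper's deep RL setting, the Q table would be replaced by a neural network trained on the same (state, action, reward, next state) transitions, with the trend term in the state providing the predictive component.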
Pages: 4