Harnessing Meta-Reinforcement Learning for Enhanced Tracking in Geofencing Systems

Cited: 0
|
Authors
Famili, Alireza [1 ]
Sun, Shihua [2 ]
Atalay, Tolga [2 ]
Stavrou, Angelos [1 ,2 ]
Affiliations
[1] WayWave Inc, Arlington, VA 22203 USA
[2] Virginia Tech, Dept Elect & Comp Engn, Arlington, VA 22203 USA
Keywords
5G mobile communication; Accuracy; Three-dimensional displays; Geometry; Drones; Distance measurement; Wireless fidelity; NP-hard problem; Mixed reality; Metaverse; Geofencing; tracking; meta-RL; sensor placement; 5G networks; LOCALIZATION; NETWORK;
DOI
10.1109/OJCOMS.2025.3531318
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Classification Code
0808 ; 0809 ;
Abstract
Geofencing technologies have become pivotal in creating virtual boundaries for both real and virtual environments, offering a secure means to control and monitor designated areas. They are now considered essential tools for defining and controlling boundaries across various applications, from aviation safety in drone management to access control within mixed reality platforms like the metaverse. Effective geofencing relies heavily on precise tracking capabilities, a critical component for maintaining the integrity and functionality of these systems. Leveraging the advantages of 5G technology, including its large bandwidth and extensive accessibility, presents a promising solution to enhance geofencing performance. In this paper, we introduce MetaFence: Meta-Reinforcement Learning for Geofencing Enhancement, a novel approach for precise geofencing utilizing indoor 5G small cells, termed "5G Points", which are optimally deployed using a meta-reinforcement learning (meta-RL) framework. Our proposed meta-RL method addresses the NP-hard problem of determining an optimal placement of 5G Points to minimize spatial geometry-induced errors. Moreover, the meta-training approach enables the learned policy to quickly adapt to diverse new environments. We devised a comprehensive test campaign to evaluate the performance of MetaFence. Our results demonstrate that this strategic placement significantly improves tracking accuracy compared to traditional methods. Furthermore, we show that the meta-training strategy enables the learned policy to generalize effectively and perform efficiently when faced with new environments.
Pages: 944 - 960
Page count: 17
Related Papers
50 records total
  • [1] Meta-Reinforcement Learning by Tracking Task Non-stationarity
    Poiani, Riccardo
    Tirinzoni, Andrea
    Restelli, Marcello
    PROCEEDINGS OF THE THIRTIETH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2021, 2021, : 2899 - 2905
  • [2] Hypernetworks in Meta-Reinforcement Learning
    Beck, Jacob
    Jackson, Matthew
    Vuorio, Risto
    Whiteson, Shimon
    CONFERENCE ON ROBOT LEARNING, VOL 205, 2022, 205 : 1478 - 1487
  • [3] Uncertainty-based Meta-Reinforcement Learning for Robust Radar Tracking
    Ott, Julius
    Servadei, Lorenzo
    Mauro, Gianfranco
    Stadelmayer, Thomas
    Santra, Avik
    Wille, Robert
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 1476 - 1483
  • [4] Meta-Reinforcement Learning for Adaptive Control of Second Order Systems
    McClement, Daniel G.
    Lawrence, Nathan P.
    Forbes, Michael G.
    Loewen, Philip D.
    Backstrom, Johan U.
    Gopaluni, R. Bhushan
    2022 IEEE INTERNATIONAL SYMPOSIUM ON ADVANCED CONTROL OF INDUSTRIAL PROCESSES (ADCONIP 2022), 2022, : 78 - 83
  • [5] Autonomous Obstacle Avoidance and Target Tracking of UAV Based on Meta-Reinforcement Learning
    Jiang W.
    Wu J.
    Wang Y.
    Hunan Daxue Xuebao/Journal of Hunan University Natural Sciences, 2022, 49 (06): 101 - 109
  • [6] Prefrontal cortex as a meta-reinforcement learning system
    Wang, Jane X.
    Kurth-Nelson, Zeb
    Kumaran, Dharshan
    Tirumala, Dhruva
    Soyer, Hubert
    Leibo, Joel Z.
    Hassabis, Demis
    Botvinick, Matthew
    Nature Neuroscience, 2018, 21 : 860 - 868
  • [7] Offline Meta-Reinforcement Learning for Industrial Insertion
    Zhao, Tony Z.
    Luo, Jianlan
    Sushkov, Oleg
    Pevceviciute, Rugile
    Heess, Nicolas
    Scholz, Jon
    Schaal, Stefan
    Levine, Sergey
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 6386 - 6393
  • [8] A Meta-Reinforcement Learning Approach to Process Control
    McClement, Daniel G.
    Lawrence, Nathan P.
    Loewen, Philip D.
    Forbes, Michael G.
    Backstrom, Johan U.
    Gopaluni, R. Bhushan
    IFAC PAPERSONLINE, 2021, 54 (03): : 685 - 692
  • [9] Unsupervised Curricula for Visual Meta-Reinforcement Learning
    Jabri, Allan
    Hsu, Kyle
    Eysenbach, Benjamin
    Gupta, Abhishek
    Levine, Sergey
    Finn, Chelsea
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [10] Meta-Reinforcement Learning of Structured Exploration Strategies
    Gupta, Abhishek
    Mendonca, Russell
    Liu, YuXuan
    Abbeel, Pieter
    Levine, Sergey
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31