RL-Label: A Deep Reinforcement Learning Approach Intended for AR Label Placement in Dynamic Scenarios

Cited by: 2
Authors
Chen, Zhutian [1 ]
Chiappalupi, Daniele [1 ,2 ]
Lin, Tica [1 ]
Yang, Yalong [3 ]
Beyer, Johanna [1 ]
Pfister, Hanspeter [1 ]
Affiliations
[1] Harvard John A Paulson Sch Engn & Appl Sci, Boston, MA 02134 USA
[2] Swiss Fed Inst Technol, Zurich, Switzerland
[3] Virginia Tech, Blacksburg, VA USA
Keywords
Three-dimensional displays; Layout; Dynamics; Visualization; Optimization; Task analysis; Sports; Augmented Reality; Reinforcement Learning; Label Placement; Dynamic Scenarios; Visualizations
DOI
10.1109/TVCG.2023.3326568
CLC classification number
TP31 [Computer software]
Discipline classification code
081202; 0835
Abstract
Labels are widely used in augmented reality (AR) to display digital information. Ensuring the readability of AR labels requires placing them in an occlusion-free manner while keeping visual links legible, especially when multiple labels exist in the scene. Although existing optimization-based methods, such as force-based methods, are effective in managing AR labels in static scenarios, they often struggle in dynamic scenarios with constantly moving objects. This is due to their focus on generating layouts optimal for the current moment, neglecting future moments and leading to sub-optimal or unstable layouts over time. In this work, we present RL-Label, a deep reinforcement learning-based method intended for managing the placement of AR labels in scenarios involving moving objects. RL-Label considers both the current and predicted future states of objects and labels, such as positions and velocities, as well as the user's viewpoint, to make informed decisions about label placement. It balances the trade-offs between immediate and long-term objectives. We tested RL-Label in simulated AR scenarios on two real-world datasets, showing that it effectively learns the decision-making process for long-term optimization, outperforming two baselines (i.e., no view management and a force-based method) by minimizing label occlusions, line intersections, and label movement distance. Additionally, a user study involving 18 participants indicates that, within our simulated environment, RL-Label excels over the baselines in aiding users to identify, compare, and summarize data on labels in dynamic scenes.
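To make the abstract's objectives concrete, below is a minimal sketch, in Python, of a per-step reward that penalizes label occlusion, leader-line crossings, and label movement, the three quantities the abstract says RL-Label minimizes. The 2D screen-space box representation, the helper functions (overlap_area, segments_cross, step_reward), and the weights w_occ, w_int, w_move are illustrative assumptions, not the authors' implementation; the paper's actual state, action, and reward definitions are given in the full text.

import numpy as np

def overlap_area(a, b):
    # Overlap area of two axis-aligned screen-space boxes (x_min, y_min, x_max, y_max).
    dx = min(a[2], b[2]) - max(a[0], b[0])
    dy = min(a[3], b[3]) - max(a[1], b[1])
    return max(dx, 0.0) * max(dy, 0.0)

def segments_cross(p1, p2, p3, p4):
    # True if segment p1-p2 properly crosses segment p3-p4 in 2D (collinear cases ignored).
    def orient(a, b, c):
        return np.sign((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    return (orient(p1, p2, p3) != orient(p1, p2, p4)
            and orient(p3, p4, p1) != orient(p3, p4, p2))

def step_reward(label_boxes, anchors, prev_centers, w_occ=1.0, w_int=0.5, w_move=0.1):
    # Negative weighted sum of the three penalties; the agent maximizes this each step.
    centers = [((x0 + x1) / 2, (y0 + y1) / 2) for x0, y0, x1, y1 in label_boxes]
    occlusion = sum(overlap_area(a, b)
                    for i, a in enumerate(label_boxes)
                    for b in label_boxes[i + 1:])
    crossings = sum(segments_cross(anchors[i], centers[i], anchors[j], centers[j])
                    for i in range(len(centers))
                    for j in range(i + 1, len(centers)))
    movement = sum(np.hypot(cx - px, cy - py)
                   for (cx, cy), (px, py) in zip(centers, prev_centers))
    return -(w_occ * occlusion + w_int * crossings + w_move * movement)

The movement term is what distinguishes this framing from a purely per-frame optimization: keeping labels near their previous positions trades some instantaneous quality for temporal stability, which is the long-term behavior the abstract says a learned policy can balance better than force-based methods.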
Pages: 1347-1357
Page count: 11