RL-Label: A Deep Reinforcement Learning Approach Intended for AR Label Placement in Dynamic Scenarios

Cited by: 2
Authors
Chen, Zhutian [1 ]
Chiappalupi, Daniele [1 ,2 ]
Lin, Tica [1 ]
Yang, Yalong [3 ]
Beyer, Johanna [1 ]
Pfister, Hanspeter [1 ]
Affiliations
[1] Harvard John A. Paulson School of Engineering and Applied Sciences, Boston, MA 02134, USA
[2] Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
[3] Virginia Tech, Blacksburg, VA, USA
Keywords
Three-dimensional displays; Layout; Dynamics; Visualization; Optimization; Task analysis; Sports; Augmented Reality; Reinforcement Learning; Label Placement; Dynamic Scenarios; Visualizations
DOI
10.1109/TVCG.2023.3326568
CLC Number
TP31 [Computer software]
Discipline Codes
081202; 0835
Abstract
Labels are widely used in augmented reality (AR) to display digital information. Ensuring the readability of AR labels requires placing them in an occlusion-free manner while keeping visual links legible, especially when multiple labels exist in the scene. Although existing optimization-based methods, such as force-based methods, are effective in managing AR labels in static scenarios, they often struggle in dynamic scenarios with constantly moving objects. This is due to their focus on generating layouts optimal for the current moment, neglecting future moments and leading to sub-optimal or unstable layouts over time. In this work, we present RL-Label, a deep reinforcement learning-based method intended for managing the placement of AR labels in scenarios involving moving objects. RL-Label considers both the current and predicted future states of objects and labels, such as positions and velocities, as well as the user's viewpoint, to make informed decisions about label placement. It balances the trade-offs between immediate and long-term objectives. We tested RL-Label in simulated AR scenarios on two real-world datasets, showing that it effectively learns the decision-making process for long-term optimization, outperforming two baselines (i.e., no view management and a force-based method) by minimizing label occlusions, line intersections, and label movement distance. Additionally, a user study involving 18 participants indicates that, within our simulated environment, RL-Label excels over the baselines in aiding users to identify, compare, and summarize data on labels in dynamic scenes.
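The abstract describes an agent that balances label occlusion, leader-line intersections, and label movement distance over time. As an illustrative sketch only (the paper's actual reward function is not given here; the metrics, weights `w_occ`, `w_int`, `w_move`, and the linear weighted-cost form are all assumptions), a per-step reward for such an agent might look like:

```python
from dataclasses import dataclass


@dataclass
class StepMetrics:
    """Per-timestep layout quality measures (hypothetical names)."""
    occluded_area: float   # total area where labels overlap objects or each other
    line_crossings: int    # number of leader-line intersections
    move_distance: float   # total label displacement since the previous step


def reward(m: StepMetrics,
           w_occ: float = 1.0,
           w_int: float = 0.5,
           w_move: float = 0.1) -> float:
    """Negative weighted cost: maximizing this reward drives the agent
    to minimize occlusions, crossings, and unnecessary label motion,
    trading off immediate layout quality against long-term stability."""
    return -(w_occ * m.occluded_area
             + w_int * m.line_crossings
             + w_move * m.move_distance)


# A stable, occlusion-free layout incurs zero cost.
perfect = reward(StepMetrics(occluded_area=0.0, line_crossings=0, move_distance=0.0))
```

A reinforcement-learning agent trained against a reward of this shape would, unlike a purely force-based layout solver, account for the discounted sum of future rewards, which is how long-term stability enters the optimization.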
Pages: 1347-1357
Page count: 11