RL-Label: A Deep Reinforcement Learning Approach Intended for AR Label Placement in Dynamic Scenarios

Cited by: 2
Authors
Chen, Zhutian [1 ]
Chiappalupi, Daniele [1 ,2 ]
Lin, Tica [1 ]
Yang, Yalong [3 ]
Beyer, Johanna [1 ]
Pfister, Hanspeter [1 ]
Affiliations
[1] Harvard John A Paulson Sch Engn & Appl Sci, Boston, MA 02134 USA
[2] Swiss Fed Inst Technol, Zurich, Switzerland
[3] Virginia Tech, Blacksburg, VA USA
Keywords
Three-dimensional displays; Layout; Dynamics; Visualization; Optimization; Task analysis; Sports; Augmented Reality; Reinforcement Learning; Label Placement; Dynamic Scenarios; VISUALIZATIONS;
DOI
10.1109/TVCG.2023.3326568
CLC number
TP31 [Computer software];
Subject classification codes
081202 ; 0835 ;
Abstract
Labels are widely used in augmented reality (AR) to display digital information. Ensuring the readability of AR labels requires placing them in an occlusion-free manner while keeping visual links legible, especially when multiple labels exist in the scene. Although existing optimization-based methods, such as force-based methods, are effective in managing AR labels in static scenarios, they often struggle in dynamic scenarios with constantly moving objects. This is due to their focus on generating layouts optimal for the current moment, neglecting future moments and leading to sub-optimal or unstable layouts over time. In this work, we present RL-Label, a deep reinforcement learning-based method intended for managing the placement of AR labels in scenarios involving moving objects. RL-Label considers both the current and predicted future states of objects and labels, such as positions and velocities, as well as the user's viewpoint, to make informed decisions about label placement. It balances the trade-offs between immediate and long-term objectives. We tested RL-Label in simulated AR scenarios on two real-world datasets, showing that it effectively learns the decision-making process for long-term optimization, outperforming two baselines (i.e., no view management and a force-based method) by minimizing label occlusions, line intersections, and label movement distance. Additionally, a user study involving 18 participants indicates that, within our simulated environment, RL-Label excels over the baselines in aiding users to identify, compare, and summarize data on labels in dynamic scenes.
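The abstract states that RL-Label minimizes label occlusions, leader-line intersections, and label movement distance while balancing immediate and long-term objectives. A minimal sketch of how such a per-frame reward could be shaped is shown below; the `LabelState` fields, the weights, and the function names are illustrative assumptions, not values or identifiers from the paper.

```python
import math
from dataclasses import dataclass


@dataclass
class LabelState:
    """Hypothetical per-frame observation for one AR label."""
    occlusion: float     # fraction of the label occluded, in [0, 1]
    intersections: int   # leader-line crossings with other labels
    move_dist: float     # distance the label moved since the last frame


def reward(state: LabelState,
           w_occ: float = 1.0,
           w_int: float = 0.5,
           w_move: float = 0.1) -> float:
    """Negative weighted sum of the three costs the paper reports
    minimizing. Weights here are placeholders for illustration."""
    return -(w_occ * state.occlusion
             + w_int * state.intersections
             + w_move * state.move_dist)
```

Under this framing, an RL agent that maximizes the discounted sum of per-frame rewards is pushed to trade an immediately occlusion-free position against the movement cost of reaching it, which is the long-horizon behavior that single-frame optimizers such as force-based methods cannot express.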
Pages: 1347-1357
Page count: 11