Haisor: Human-aware Indoor Scene Optimization via Deep Reinforcement Learning

Cited by: 1
Authors
Sun, Jia-Mu [1 ]
Yang, Jie [1 ]
Mo, Kaichun [2 ]
Lai, Yu-Kun [3 ]
Guibas, Leonidas [2 ]
Gao, Lin [1 ]
Affiliations
[1] Chinese Acad Sci, Beijing Key Lab Mobile Comp & Pervas Device, Inst Comp Technol, Beijing 100190, Peoples R China
[2] Stanford Univ, Dept Comp Sci, 450 Serra Mall, Stanford, CA 94305 USA
[3] Cardiff Univ, Sch Comp Sci & Informat, Cardiff CF10 3AT, Wales
Source
ACM TRANSACTIONS ON GRAPHICS, 2024, Vol. 43, No. 2
Funding
National Natural Science Foundation of China
Keywords
Scene optimization; scene synthesis; human aware; reinforcement learning; Monte Carlo search; robot simulation; imitation learning; REARRANGEMENT;
DOI
10.1145/3632947
Chinese Library Classification (CLC)
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
3D scene synthesis facilitates and benefits many real-world applications. Most scene generators focus on making indoor scenes plausible by learning from training data and leveraging extra constraints such as adjacency and symmetry. Although the generated 3D scenes are mostly plausible, with visually realistic layouts, they can be functionally unsuitable for human users to navigate and interact with furniture. Our key observation is that human activity plays a critical role and sufficient free space is essential for human-scene interactions. This is exactly where many existing synthesized scenes fail: the seemingly correct layouts are often not fit for living. To tackle this, we present Haisor, a human-aware optimization framework for 3D indoor scene arrangement via reinforcement learning, which aims to find an action sequence that optimizes the indoor scene layout automatically. Based on a hierarchical scene graph representation, an optimal action sequence is predicted and performed via Deep Q-Learning with Monte Carlo Tree Search (MCTS), where MCTS is our key feature for finding the optimal solution over long action sequences and a large action space. Multiple human-aware rewards are designed as our core criteria of human-scene interaction, aiming to identify the next smart action by leveraging powerful reinforcement learning. Our framework is optimized end-to-end, given indoor scenes with part-level furniture layouts including part mobility information. Furthermore, our methodology is extensible and allows different reward designs to be used to achieve personalized indoor scene synthesis. Extensive experiments demonstrate that our approach optimizes the layout of 3D indoor scenes in a human-aware manner, producing results that are more realistic and plausible than those of state-of-the-art generators, and that it produces superior actions, outperforming alternative baselines.
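The search loop the abstract describes, Monte Carlo Tree Search proposing a sequence of rearrangement actions scored by human-aware rewards, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the action set, the toy apply_action transition model, and the human_aware_reward stand-in (which in Haisor would combine the learned Deep Q-network with the paper's free-space and interaction rewards over the hierarchical scene graph) are all assumptions made for illustration.

```python
import math
import random
from dataclasses import dataclass, field

# Illustrative action set; the paper's action space is much richer.
ACTIONS = ["move_left", "move_right", "move_up", "move_down", "rotate_90"]

def apply_action(layout, action):
    """Toy transition model: apply one action to a randomly chosen object.
    A layout maps object name -> (x, y, angle)."""
    obj = random.choice(list(layout))
    x, y, a = layout[obj]
    dx, dy, da = {
        "move_left": (-1, 0, 0), "move_right": (1, 0, 0),
        "move_up": (0, 1, 0), "move_down": (0, -1, 0),
        "rotate_90": (0, 0, 90),
    }[action]
    new = dict(layout)
    new[obj] = (x + dx, y + dy, (a + da) % 360)
    return new

def human_aware_reward(layout):
    """Toy stand-in for human-aware rewards (free space, reachability):
    here we simply reward spreading objects apart."""
    pts = list(layout.values())
    dists = [abs(p[0] - q[0]) + abs(p[1] - q[1])
             for i, p in enumerate(pts) for q in pts[i + 1:]]
    return sum(dists) / max(len(dists), 1)

@dataclass
class Node:
    layout: dict
    parent: "Node" = None
    children: dict = field(default_factory=dict)  # action -> child Node
    visits: int = 0
    value: float = 0.0

def ucb_select(node, c=1.4):
    """UCB1 over the children; a learned Q-network would bias this choice."""
    return max(
        node.children.items(),
        key=lambda kv: kv[1].value / (kv[1].visits + 1e-6)
        + c * math.sqrt(math.log(node.visits + 1) / (kv[1].visits + 1e-6)),
    )

def mcts(root_layout, n_sim=200, depth=5):
    root = Node(root_layout)
    for _ in range(n_sim):
        node = root
        # Selection: descend while fully expanded; Expansion: add one new child.
        for _ in range(depth):
            untried = [a for a in ACTIONS if a not in node.children]
            if untried:
                a = random.choice(untried)
                node.children[a] = Node(apply_action(node.layout, a), parent=node)
                node = node.children[a]
                break
            _, node = ucb_select(node)
        # Evaluation with the toy human-aware score, then backpropagation.
        r = human_aware_reward(node.layout)
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    # Return the most visited root action, as in standard MCTS.
    best_action, _ = max(root.children.items(), key=lambda kv: kv[1].visits)
    return best_action

if __name__ == "__main__":
    layout = {"sofa": (0, 0, 0), "table": (1, 0, 0), "chair": (1, 1, 0)}
    print("suggested next action:", mcts(layout))
```

Returning the most visited root action, rather than the highest-valued one, is the standard MCTS choice and mirrors how a long rearrangement sequence is built one action at a time.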
Pages: 17
Related papers
50 records in total
  • [21] Attention-Aware Face Hallucination via Deep Reinforcement Learning
    Cao, Qingxing
    Lin, Liang
    Shi, Yukai
    Liang, Xiaodan
    Li, Guanbin
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 1656 - 1664
  • [22] Learning to Navigate in Human Environments via Deep Reinforcement Learning
    Gao, Xingyuan
    Sun, Shiying
    Zhao, Xiaoguang
    Tan, Min
    NEURAL INFORMATION PROCESSING (ICONIP 2019), PT I, 2019, 11953 : 418 - 429
  • [23] CCTV-Informed Human-Aware Robot Navigation in Crowded Indoor Environments
    Kim, Mincheul
    Kwon, Youngsun
    Lee, Sebin
    Yoon, Sung-eui
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2024, 9 (06): 5767 - 5774
  • [24] Market Making Strategy Optimization via Deep Reinforcement Learning
    Sun, Tianyuan
    Huang, Dechun
    Yu, Jie
    IEEE ACCESS, 2022, 10 : 9085 - 9093
  • [25] Dynamical Hyperparameter Optimization via Deep Reinforcement Learning in Tracking
    Dong, Xingping
    Shen, Jianbing
    Wang, Wenguan
    Shao, Ling
    Ling, Haibin
    Porikli, Fatih
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (05) : 1515 - 1529
  • [27] Author Correction: Optimization of Molecules via Deep Reinforcement Learning
    Zhenpeng Zhou
    Steven Kearnes
    Li Li
    Richard N. Zare
    Patrick Riley
    Scientific Reports, 10
  • [28] Optimization of URLLC and eMBB Multiplexing via Deep Reinforcement Learning
    Li, Yang
    Hu, Chunjing
    Wang, Jun
    Xu, Mingfeng
    2019 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS WORKSHOPS IN CHINA (ICCC WORKSHOPS), 2019, : 245 - 250
  • [29] Static Neural Compiler Optimization via Deep Reinforcement Learning
    Mammadli, Rahim
    Jannesari, Ali
    Wolf, Felix
    PROCEEDINGS OF SIXTH WORKSHOP ON THE LLVM COMPILER INFRASTRUCTURE IN HPC AND WORKSHOP ON HIERARCHICAL PARALLELISM FOR EXASCALE COMPUTING (LLVM-HPC2020 AND HIPAR 2020), 2020, : 1 - 11
  • [30] Learning Human-Aware Path Planning with Fully Convolutional Networks
    Perez-Higueras, Noe
    Caballero, Fernando
    Merino, Luis
    2018 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2018, : 5897 - 5902