Haisor: Human-aware Indoor Scene Optimization via Deep Reinforcement Learning

Cited by: 1
|
Authors
Sun, Jia-Mu [1 ]
Yang, Jie [1 ]
Mo, Kaichun [2 ]
Lai, Yu-Kun [3 ]
Guibas, Leonidas [2 ]
Gao, Lin [1 ]
Affiliations
[1] Chinese Acad Sci, Beijing Key Lab Mobile Comp & Pervas Device, Inst Comp Technol, Beijing 100190, Peoples R China
[2] Stanford Univ, Dept Comp Sci, 450 Serra Mall, Stanford, CA 94305 USA
[3] Cardiff Univ, Sch Comp Sci & Informat, Cardiff CF10 3AT, Wales
Source
ACM TRANSACTIONS ON GRAPHICS | 2024, Vol. 43, No. 2
Funding
National Natural Science Foundation of China;
Keywords
Scene optimization; scene synthesis; human aware; reinforcement learning; Monte Carlo search; robot simulation; imitation learning; REARRANGEMENT;
D O I
10.1145/3632947
CLC Classification
TP31 [Computer Software];
Discipline Code
081202; 0835;
Abstract
3D scene synthesis facilitates and benefits many real-world applications. Most scene generators focus on making indoor scenes plausible by learning from training data and leveraging extra constraints such as adjacency and symmetry. Although the generated 3D scenes are mostly plausible, with visually realistic layouts, they can be functionally unsuitable for human users to navigate and to interact with furniture. Our key observation is that human activity plays a critical role and that sufficient free space is essential for human-scene interaction. This is exactly where many existing synthesized scenes fail: the seemingly correct layouts are often not fit for living. To tackle this, we present Haisor, a human-aware optimization framework for 3D indoor scene arrangement via reinforcement learning, which aims to find an action sequence that optimizes the indoor scene layout automatically. Based on a hierarchical scene graph representation, an optimal action sequence is predicted and performed via Deep Q-Learning with Monte Carlo Tree Search (MCTS), where MCTS is our key feature for searching for the optimal solution over long action sequences and a large action space. Multiple human-aware rewards are designed as our core criteria of human-scene interaction, aiming to identify the next smart action by leveraging powerful reinforcement learning. Our framework is optimized end-to-end, given indoor scenes with part-level furniture layouts including part mobility information. Furthermore, our methodology is extensible and allows different reward designs to achieve personalized indoor scene synthesis. Extensive experiments demonstrate that our approach optimizes the layout of 3D indoor scenes in a human-aware manner, producing results that are more realistic and plausible than those of state-of-the-art generators, and that it produces superior smart actions, outperforming alternative baselines.
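The pipeline the abstract describes — states as furniture layouts, move actions applied in sequence, and a human-aware free-space reward driving Q-learning — can be illustrated with a toy sketch. This is a hypothetical simplification for intuition only: it uses a tabular Q-table on a tiny grid room instead of a deep Q-network on a scene graph, omits MCTS entirely, and the `free_space_reward` function below is an invented proxy, not the paper's reward design.

```python
import random
from collections import defaultdict

GRID = 5                                      # toy room: a 5x5 grid of cells
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # move one item by one cell

def free_space_reward(layout):
    """Toy human-aware reward: favor free floor area, penalize overlapping
    items and cramped (adjacent) furniture so a person can walk between."""
    occupied = set(layout)
    reward = 1.0 - len(occupied) / (GRID * GRID)
    reward -= 1.0 * (len(layout) - len(occupied))      # overlap penalty
    for (x, y) in layout:
        for dx, dy in ACTIONS:
            if (x + dx, y + dy) in occupied:
                reward -= 0.1                          # adjacency penalty
    return reward

def step(layout, item, action):
    """Apply one move action to one furniture item, clamped to the room."""
    x, y = layout[item]
    nx = min(max(x + action[0], 0), GRID - 1)
    ny = min(max(y + action[1], 0), GRID - 1)
    new_layout = list(layout)
    new_layout[item] = (nx, ny)
    return tuple(new_layout)

def optimize(layout, episodes=100, horizon=10, eps=0.3,
             alpha=0.5, gamma=0.9, seed=0):
    """Tabular epsilon-greedy Q-learning over short action sequences;
    returns the best layout encountered during training."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    choices = [(i, a) for i in range(len(layout)) for a in ACTIONS]
    best, best_r = layout, free_space_reward(layout)
    for _ in range(episodes):
        s = layout
        for _ in range(horizon):
            if rng.random() < eps:                     # explore
                i, a = rng.choice(choices)
            else:                                      # exploit
                i, a = max(choices, key=lambda c: Q[(s, c)])
            s2 = step(s, i, a)
            r = free_space_reward(s2)
            target = r + gamma * max(Q[(s2, c)] for c in choices)
            Q[(s, (i, a))] += alpha * (target - Q[(s, (i, a))])
            if r > best_r:
                best, best_r = s2, r
            s = s2
    return best
```

Starting from a cramped corner layout such as `((0, 0), (0, 1), (1, 0))`, the learned action sequence spreads the items apart, increasing the free-space reward — the same intuition, at toy scale, as the paper's human-aware layout optimization.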
Pages: 17
Related Papers
50 records
  • [11] Optimization of Molecules via Deep Reinforcement Learning
    Zhou, Zhenpeng
    Kearnes, Steven
    Li, Li
    Zare, Richard N.
    Riley, Patrick
    SCIENTIFIC REPORTS, 2019, 9 (1)
  • [13] Human-Aware Waypoint Planner for Mobile Robot in Indoor Environments
    Yang, Sungwoo
    Kang, Sumin
    Kim, Myunghyun
    Kim, Donghan
    2022 SIXTH IEEE INTERNATIONAL CONFERENCE ON ROBOTIC COMPUTING, IRC, 2022, : 287 - 291
  • [14] Semantic Scene Segmentation for Indoor Robot Navigation via Deep Learning
    Yeboah, Yao
    Cai, Yanguang
    Wei, Wu
    Farisi, Zeyad
    PROCEEDINGS OF ICRCA 2018: 2018 THE 3RD INTERNATIONAL CONFERENCE ON ROBOTICS, CONTROL AND AUTOMATION / ICRMV 2018: 2018 THE 3RD INTERNATIONAL CONFERENCE ON ROBOTICS AND MACHINE VISION, 2018, : 112 - 118
  • [15] Proactive Caching in Auto Driving Scene via Deep Reinforcement Learning
    Zhu, Zihui
    Zhang, Zhengming
    Yan, Wen
    Huang, Yongming
    Yang, Luxi
    2019 11TH INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS AND SIGNAL PROCESSING (WCSP), 2019,
  • [16] Energy optimization associated with thermal comfort and indoor air control via a deep reinforcement learning algorithm
    Valladares, William
    Galindo, Marco
    Gutierrez, Jorge
    Wu, Wu-Chieh
    Liao, Kuo-Kai
    Liao, Jen-Chung
    Lu, Kuang-Chin
    Wang, Chi-Chuan
    BUILDING AND ENVIRONMENT, 2019, 155 : 105 - 117
  • [17] Bin Packing Optimization via Deep Reinforcement Learning
    Wang, Baoying
    Lin, Zhaohui
    Kong, Weijie
    Dong, Huixu
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (03): : 2542 - 2549
  • [18] Network Topology Optimization via Deep Reinforcement Learning
    Li, Zhuoran
    Wang, Xing
    Pan, Ling
    Zhu, Lin
    Wang, Zhendong
    Feng, Junlan
    Deng, Chao
    Huang, Longbo
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2023, 71 (05) : 2847 - 2859
  • [19] Indoor Navigation with Deep Reinforcement Learning
    Bakale, Vijayalakshmi A.
    Kumar, Yeshwanth V. S.
    Roodagi, Vivekanand C.
    Kulkarni, Yashaswini N.
    Patil, Mahesh S.
    Chickerur, Satyadhyan
    PROCEEDINGS OF THE 5TH INTERNATIONAL CONFERENCE ON INVENTIVE COMPUTATION TECHNOLOGIES (ICICT-2020), 2020, : 660 - 665
  • [20] Power-Aware Traffic Engineering via Deep Reinforcement Learning
    Pan, Tian
    Peng, Xiaoyu
    Bian, Zizheng
    Lin, Xingchen
    Song, Enge
    Huang, Tao
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS WORKSHOPS (IEEE INFOCOM 2019 WKSHPS), 2019, : 1009 - 1010