Haisor: Human-aware Indoor Scene Optimization via Deep Reinforcement Learning

Cited by: 1
Authors
Sun, Jia-Mu [1 ]
Yang, Jie [1 ]
Mo, Kaichun [2 ]
Lai, Yu-Kun [3 ]
Guibas, Leonidas [2 ]
Gao, Lin [1 ]
Affiliations
[1] Chinese Acad Sci, Beijing Key Lab Mobile Comp & Pervas Device, Inst Comp Technol, Beijing 100190, Peoples R China
[2] Stanford Univ, Dept Comp Sci, 450 Serra Mall, Stanford, CA 94305 USA
[3] Cardiff Univ, Sch Comp Sci & Informat, Cardiff CF10 3AT, Wales
Source
ACM TRANSACTIONS ON GRAPHICS, 2024, 43 (02)
Funding
National Natural Science Foundation of China
Keywords
Scene optimization; scene synthesis; human aware; reinforcement learning; Monte Carlo search; robot simulation; imitation learning; REARRANGEMENT;
DOI
10.1145/3632947
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Classification Codes
081202 ; 0835 ;
Abstract
3D scene synthesis facilitates and benefits many real-world applications. Most scene generators focus on making indoor scenes plausible by learning from training data and leveraging extra constraints such as adjacency and symmetry. Although the generated 3D scenes are mostly plausible, with visually realistic layouts, they can be functionally unsuitable for human users to navigate and interact with furniture. Our key observation is that human activity plays a critical role and that sufficient free space is essential for human-scene interactions. This is exactly where many existing synthesized scenes fail: the seemingly correct layouts are often not fit for living. To tackle this, we present Haisor, a human-aware optimization framework for 3D indoor scene arrangement via reinforcement learning, which aims to find an action sequence that optimizes the indoor scene layout automatically. Based on a hierarchical scene graph representation, an optimal action sequence is predicted and performed via Deep Q-Learning with Monte Carlo Tree Search (MCTS), where MCTS is the key component that searches for the optimal solution over long action sequences and a large action space. Multiple human-aware rewards are designed as our core criteria of human-scene interaction, aiming to identify the next smart action by leveraging powerful reinforcement learning. Our framework is optimized end-to-end, given indoor scenes with part-level furniture layouts including part mobility information. Furthermore, our methodology is extensible and allows different reward designs to be used to achieve personalized indoor scene synthesis. Extensive experiments demonstrate that our approach optimizes the layout of 3D indoor scenes in a human-aware manner, producing results that are more realistic and plausible than those of state-of-the-art generators, and that it yields superior smart actions, outperforming alternative baselines.
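
The abstract describes a loop in which a learned Q-function proposes furniture-rearrangement actions, MCTS looks ahead over action sequences, and human-aware rewards (such as preserving free space for navigation) score candidate layouts. The Python sketch below illustrates that loop under simplified assumptions; every name (SceneState, candidate_actions, human_aware_reward, q_value, mcts_select_action) and the toy clearance-based reward are hypothetical stand-ins, not the paper's actual architecture, action space, or reward design.

# Minimal sketch of Q-value-guided MCTS over furniture-move actions.
# All names and the clearance reward are illustrative assumptions.

import math
from dataclasses import dataclass, field

@dataclass
class SceneState:
    # Hypothetical layout: object id -> (x, y) position on a 2D floor plan.
    positions: dict

    def apply(self, action):
        obj_id, dx, dy = action
        new_positions = dict(self.positions)
        x, y = new_positions[obj_id]
        new_positions[obj_id] = (x + dx, y + dy)
        return SceneState(new_positions)

def candidate_actions(state, step=0.5):
    # Discretized moves for each object (stand-in for the real action space).
    moves = [(step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)]
    return [(obj, dx, dy) for obj in state.positions for dx, dy in moves]

def human_aware_reward(state, min_clearance=1.0):
    # Toy "free space" reward: penalize object pairs closer than a clearance
    # threshold, so a person could still walk between them.
    ids = list(state.positions)
    penalty = 0.0
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            (x1, y1), (x2, y2) = state.positions[ids[i]], state.positions[ids[j]]
            penalty += max(0.0, min_clearance - math.hypot(x1 - x2, y1 - y2))
    return -penalty

def q_value(state, action):
    # Placeholder for the learned DQN; the one-step reward stands in for Q(s, a).
    return human_aware_reward(state.apply(action))

@dataclass
class Node:
    state: SceneState
    visits: int = 0
    value: float = 0.0
    children: dict = field(default_factory=dict)  # action -> Node

def mcts_select_action(root_state, simulations=200, depth=3, c_uct=1.4):
    root = Node(root_state)
    for _ in range(simulations):
        node, path = root, []
        for _ in range(depth):
            if not node.children:
                # Expansion: initialize each child's value with the Q estimate.
                for a in candidate_actions(node.state):
                    node.children[a] = Node(node.state.apply(a),
                                            value=q_value(node.state, a))
            # Selection: UCT over mean value plus an exploration bonus.
            a, node = max(
                node.children.items(),
                key=lambda kv: kv[1].value / (kv[1].visits + 1)
                + c_uct * math.sqrt(math.log(node.visits + 1) / (kv[1].visits + 1)))
            path.append(node)
        # Evaluate the leaf with the human-aware reward and back it up.
        leaf_reward = human_aware_reward(node.state)
        for n in [root] + path:
            n.visits += 1
            n.value += leaf_reward
    best_action, _ = max(root.children.items(), key=lambda kv: kv[1].visits)
    return best_action

if __name__ == "__main__":
    scene = SceneState({"sofa": (0.0, 0.0), "table": (0.3, 0.0), "chair": (0.0, 0.4)})
    print("suggested move:", mcts_select_action(scene, simulations=50))

In the paper's setting, the trained Q-network would replace the one-step reward used here as a value prior, and the reward would combine several human-aware terms rather than a single clearance penalty.
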
Pages: 17
Related Papers
50 records in total
  • [11] EFFICIENT INDOOR LOCALIZATION VIA REINFORCEMENT LEARNING
    Milioris, Dimitris
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 8350 - 8354
  • [12] User preference-aware video highlight detection via deep reinforcement learning
    Wang, Han
    Wang, Kexin
    Wu, Yuqing
    Wang, Zhongzhi
    Zou, Ling
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (21-22) : 15015 - 15024
  • [13] Obstacle-Aware Navigation of Soft Growing Robots via Deep Reinforcement Learning
    El-Hussieny, Haitham
    Hameed, Ibrahim A.
    IEEE ACCESS, 2024, 12 : 38192 - 38201
  • [14] An Isolation-aware Online Virtual Network Embedding via Deep Reinforcement Learning
    Gohar, Ali
    Rong, Chunming
    Lee, Sanghwan
    2023 IEEE/ACM 23RD INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND INTERNET COMPUTING WORKSHOPS, CCGRIDW, 2023, : 89 - 95
  • [15] Salience-Aware Face Presentation Attack Detection via Deep Reinforcement Learning
    Yu, Bingyao
    Lu, Jiwen
    Li, Xiu
    Zhou, Jie
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 413 - 427
  • [16] Learning to Dress: Synthesizing Human Dressing Motion via Deep Reinforcement Learning
    Clegg, Alexander
    Yu, Wenhao
    Tan, Jie
    Liu, C. Karen
    Turk, Greg
    ACM TRANSACTIONS ON GRAPHICS, 2018, 37 (06):
  • [17] Learning to dress: Synthesizing human dressing motion via deep reinforcement learning
    Clegg, Alexander
    Yu, Wenhao
    Tan, Jie
    Liu, C. Karen
    Turk, Greg
    ACM TRANSACTIONS ON GRAPHICS, 2018, 37 (06):
  • [18] User preference-aware video highlight detection via deep reinforcement learning
    Wang, Han
    Wang, Kexin
    Wu, Yuqing
    Wang, Zhongzhi
    Zou, Ling
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (21-22) : 15015 - 15024
  • [19] Learning to Dress: Synthesizing Human Dressing Motion via Deep Reinforcement Learning
    Clegg, Alexander
    Yu, Wenhao
    Tan, Jie
    Liu, C. Karen
    Turk, Greg
    SIGGRAPH ASIA'18: SIGGRAPH ASIA 2018 TECHNICAL PAPERS, 2018,
  • [20] Scene Mover: Automatic Move Planning for Scene Arrangement by Deep Reinforcement Learning
    Wang, Hanqing
    Liang, Wei
    Yu, Lap-Fai
    ACM TRANSACTIONS ON GRAPHICS, 2020, 39 (06):