Haisor: Human-aware Indoor Scene Optimization via Deep Reinforcement Learning

Cited by: 1
Authors
Sun, Jia-Mu [1 ]
Yang, Jie [1 ]
Mo, Kaichun [2 ]
Lai, Yu-Kun [3 ]
Guibas, Leonidas [2 ]
Gao, Lin [1 ]
Affiliations
[1] Chinese Acad Sci, Beijing Key Lab Mobile Comp & Pervas Device, Inst Comp Technol, Beijing 100190, Peoples R China
[2] Stanford Univ, Dept Comp Sci, 450 Serra Mall, Stanford, CA 94305 USA
[3] Cardiff Univ, Sch Comp Sci & Informat, Cardiff CF10 3AT, Wales
Source
ACM TRANSACTIONS ON GRAPHICS, 2024, Vol. 43, No. 2
Funding
National Natural Science Foundation of China;
Keywords
Scene optimization; scene synthesis; human aware; reinforcement learning; Monte Carlo search; robot simulation; imitation learning; REARRANGEMENT;
DOI
10.1145/3632947
CLC Classification
TP31 [Computer Software];
Subject Classification
081202; 0835;
Abstract
3D scene synthesis facilitates and benefits many real-world applications. Most scene generators focus on making indoor scenes plausible by learning from training data and leveraging extra constraints such as adjacency and symmetry. Although the generated 3D scenes are mostly plausible, with visually realistic layouts, they can be functionally unsuitable for human users to navigate and interact with furniture in. Our key observation is that human activity plays a critical role and sufficient free space is essential for human-scene interactions. This is exactly where many existing synthesized scenes fail: the seemingly correct layouts are often not fit for living. To tackle this, we present Haisor, a human-aware optimization framework for 3D indoor scene arrangement via reinforcement learning, which aims to find an action sequence that optimizes the indoor scene layout automatically. Based on a hierarchical scene graph representation, an optimal action sequence is predicted and performed via Deep Q-Learning with Monte Carlo Tree Search (MCTS), where MCTS is our key feature for searching for the optimal solution over long action sequences and a large action space. Multiple human-aware rewards are designed as our core criteria of human-scene interaction, aiming to identify the next smart action by leveraging powerful reinforcement learning. Our framework is optimized end-to-end given indoor scenes with part-level furniture layouts, including part mobility information. Furthermore, our methodology is extensible and allows different reward designs to achieve personalized indoor scene synthesis. Extensive experiments demonstrate that our approach optimizes the layout of 3D indoor scenes in a human-aware manner, producing results that are more realistic and plausible than those of state-of-the-art generators, and that it produces superior smart actions, outperforming alternative baselines.
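The abstract describes selecting furniture-rearrangement actions via Q-value estimates combined with Monte Carlo Tree Search over long action sequences. The following is a minimal, self-contained toy sketch of that idea only, not the paper's implementation: the 1-D grid "scene", the nudge actions, the free-space reward proxy, and the `q_estimate` stand-in for the learned Q-network are all hypothetical simplifications introduced here for illustration.

```python
import math

GRID = 10  # toy 1-D "room": furniture items occupy integer cells

def actions(state):
    """Enumerate legal nudge actions as (item index, delta)."""
    acts = []
    for i, x in enumerate(state):
        for d in (-1, 1):
            nx = x + d
            if 0 <= nx < GRID and nx not in state:
                acts.append((i, d))
    return acts

def step(state, action):
    """Apply a nudge and return the new (sorted) state tuple."""
    i, d = action
    s = list(state)
    s[i] += d
    return tuple(sorted(s))

def reward(state):
    """Human-aware proxy reward: the largest contiguous free span
    (walkable space) in the room."""
    occ = sorted(state)
    gaps = [occ[0], GRID - 1 - occ[-1]]
    gaps += [b - a - 1 for a, b in zip(occ, occ[1:])]
    return max(gaps)

def q_estimate(state, action):
    """Placeholder for the learned Q-network: one-step lookahead reward."""
    return reward(step(state, action))

def mcts_choose(state, n_sim=200, depth=3, c=1.4):
    """UCT-style search over short action sequences; q_estimate seeds
    the first expansion, visit counts pick the final action."""
    N, W = {}, {}  # visit counts and total returns per (state, action)

    def simulate(s, d):
        if d == 0 or not actions(s):
            return reward(s)
        acts = actions(s)
        total = sum(N.get((s, a), 0) for a in acts)

        def uct(a):
            n = N.get((s, a), 0)
            if n == 0:
                return float("inf")  # explore unvisited actions first
            return W[(s, a)] / n + c * math.sqrt(math.log(total) / n)

        a = max(acts, key=uct) if total else max(acts, key=lambda a: q_estimate(s, a))
        r = simulate(step(s, a), d - 1)
        N[(s, a)] = N.get((s, a), 0) + 1
        W[(s, a)] = W.get((s, a), 0) + r
        return r

    for _ in range(n_sim):
        simulate(state, depth)
    return max(actions(state), key=lambda a: N.get((state, a), 0))
```

In this sketch, multi-step search lets the agent prefer nudges that only pay off after several moves, which is the role the abstract attributes to MCTS over a greedy Q-policy; the real system operates on hierarchical scene graphs with multiple human-aware reward terms rather than a 1-D grid.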
Pages: 17
Related Papers
(50 records)
  • [1] Human-Aware Reinforcement Learning for Adaptive Human Robot Teaming
    Singh, Saurav
    Heard, Jamison
    PROCEEDINGS OF THE 2022 17TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI '22), 2022, : 1049 - 1052
  • [2] Human-Aware Robot Navigation via Reinforcement Learning with Hindsight Experience Replay and Curriculum Learning
    Li, Keyu
    Lu, Ye
    Meng, Max Q.-H.
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS (IEEE-ROBIO 2021), 2021, : 346 - 351
  • [3] Route Optimization via Environment-Aware Deep Network and Reinforcement Learning
    Guo, Pengzhan
    Xiao, Keli
    Ye, Zeyang
    Zhu, Wei
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2021, 12 (06)
  • [4] Bin Packing Optimization via Deep Reinforcement Learning
    Wang, Baoying
    Lin, Zhaohui
    Kong, Weijie
    Dong, Huixu
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2025, 10 (03): : 2542 - 2549
  • [5] Indoor Navigation with Deep Reinforcement Learning
    Bakale, Vijayalakshmi A.
    Kumar, Yeshwanth V. S.
    Roodagi, Vivekanand C.
    Kulkarni, Yashaswini N.
    Patil, Mahesh S.
    Chickerur, Satyadhyan
    PROCEEDINGS OF THE 5TH INTERNATIONAL CONFERENCE ON INVENTIVE COMPUTATION TECHNOLOGIES (ICICT-2020), 2020, : 660 - 665
  • [6] Learning to Navigate in Human Environments via Deep Reinforcement Learning
    Gao, Xingyuan
    Sun, Shiying
    Zhao, Xiaoguang
    Tan, Min
    NEURAL INFORMATION PROCESSING (ICONIP 2019), PT I, 2019, 11953 : 418 - 429
  • [7] Market Making Strategy Optimization via Deep Reinforcement Learning
    Sun, Tianyuan
    Huang, Dechun
    Yu, Jie
    IEEE ACCESS, 2022, 10 : 9085 - 9093
  • [8] Dynamical Hyperparameter Optimization via Deep Reinforcement Learning in Tracking
    Dong, Xingping
    Shen, Jianbing
    Wang, Wenguan
    Shao, Ling
    Ling, Haibin
    Porikli, Fatih
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (05) : 1515 - 1529
  • [9] Static Neural Compiler Optimization via Deep Reinforcement Learning
    Mammadli, Rahim
    Jannesari, Ali
    Wolf, Felix
    PROCEEDINGS OF SIXTH WORKSHOP ON THE LLVM COMPILER INFRASTRUCTURE IN HPC AND WORKSHOP ON HIERARCHICAL PARALLELISM FOR EXASCALE COMPUTING (LLVM-HPC2020 AND HIPAR 2020), 2020, : 1 - 11
  • [10] NetRL: Task-Aware Network Denoising via Deep Reinforcement Learning
    Xu, Jiarong
    Yang, Yang
    Pu, Shiliang
    Fu, Yao
    Feng, Jun
    Jiang, Weihao
    Lu, Jiangang
    Wang, Chunping
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (01) : 810 - 823