Haisor: Human-aware Indoor Scene Optimization via Deep Reinforcement Learning

Cited by: 1
Authors
Sun, Jia-Mu [1 ]
Yang, Jie [1 ]
Mo, Kaichun [2 ]
Lai, Yu-Kun [3 ]
Guibas, Leonidas [2 ]
Gao, Lin [1 ]
Affiliations
[1] Chinese Acad Sci, Beijing Key Lab Mobile Comp & Pervas Device, Inst Comp Technol, Beijing 100190, Peoples R China
[2] Stanford Univ, Dept Comp Sci, 450 Serra Mall, Stanford, CA 94305 USA
[3] Cardiff Univ, Sch Comp Sci & Informat, Cardiff CF10 3AT, Wales
Source
ACM TRANSACTIONS ON GRAPHICS | 2024, Vol. 43, Issue 2
Funding
National Natural Science Foundation of China;
Keywords
Scene optimization; scene synthesis; human aware; reinforcement learning; Monte Carlo search; robot simulation; imitation learning; REARRANGEMENT;
DOI
10.1145/3632947
CLC Classification Number
TP31 [Computer Software];
Subject Classification Codes
081202 ; 0835 ;
Abstract
3D scene synthesis facilitates and benefits many real-world applications. Most scene generators focus on making indoor scenes plausible by learning from training data and leveraging extra constraints such as adjacency and symmetry. Although the generated 3D scenes are mostly plausible, with visually realistic layouts, they can be functionally unsuitable for human users to navigate and interact with furniture. Our key observation is that human activity plays a critical role and that sufficient free space is essential for human-scene interactions. This is exactly where many existing synthesized scenes fail: the seemingly correct layouts are often not fit for living. To tackle this, we present Haisor, a human-aware optimization framework for 3D indoor scene arrangement via reinforcement learning, which aims to find an action sequence that optimizes the indoor scene layout automatically. Based on a hierarchical scene graph representation, an optimal action sequence is predicted and performed via Deep Q-Learning with Monte Carlo Tree Search (MCTS), where MCTS is the key component for finding optimal solutions over long action sequences and a large action space. Multiple human-aware rewards are designed as the core criteria of human-scene interaction, enabling the agent to identify the next smart action by leveraging powerful reinforcement learning. Our framework is optimized end-to-end, given indoor scenes with part-level furniture layouts including part mobility information. Furthermore, our methodology is extensible and allows different reward designs to achieve personalized indoor scene synthesis. Extensive experiments demonstrate that our approach optimizes the layout of 3D indoor scenes in a human-aware manner, producing results that are more realistic and plausible than the original outputs of state-of-the-art generators, and that it produces superior smart actions, outperforming alternative baselines.
Pages: 17
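The abstract describes optimizing a furniture layout with Deep Q-Learning combined with Monte Carlo Tree Search under human-aware rewards. The minimal Python sketch below only illustrates that general pattern on a toy grid "scene": a small Q-network scores discrete furniture-move actions, and short random rollouts re-rank the top candidates under a crude clearance reward. Every name (QNet, clearance_reward, rollout_value, choose_action) is hypothetical, and the untrained network, random rollouts, and single reward term stand in for the paper's scene-graph model, full MCTS, and multiple human-aware rewards.

```python
# Minimal sketch, assuming a toy grid "scene"; not the Haisor implementation.
import random
import numpy as np
import torch
import torch.nn as nn

N_OBJECTS, GRID = 4, 8                      # 4 furniture items on an 8x8 grid
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # unit translations of one object
ACTIONS = [(i, m) for i in range(N_OBJECTS) for m in MOVES]

def clearance_reward(positions):
    """Stand-in 'human-aware' reward: smallest pairwise Manhattan distance
    between furniture items, normalized to [0, 1] (more clearance is better)."""
    dists = [abs(a - b).sum() for i, a in enumerate(positions)
             for b in positions[i + 1:]]
    return float(min(dists)) / (2 * (GRID - 1))

def step(positions, action):
    """Apply one (object, move) action, clamped to the grid; returns a new layout."""
    obj, (dx, dy) = action
    new = positions.copy()
    new[obj] = np.clip(new[obj] + np.array([dx, dy]), 0, GRID - 1)
    return new

class QNet(nn.Module):
    """Tiny MLP mapping a flattened layout to one Q-value per discrete action
    (untrained here; it only illustrates the action-scoring interface)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_OBJECTS * 2, 64), nn.ReLU(), nn.Linear(64, len(ACTIONS)))
    def forward(self, x):
        return self.net(x)

def rollout_value(positions, depth=3, rollouts=8):
    """Average reward of short random rollouts: a cheap stand-in for full MCTS."""
    total = 0.0
    for _ in range(rollouts):
        layout = positions
        for _ in range(depth):
            layout = step(layout, random.choice(ACTIONS))
        total += clearance_reward(layout)
    return total / rollouts

def choose_action(qnet, positions, top_k=4):
    """Re-rank the Q-network's top-k actions by rollout value and pick the best."""
    with torch.no_grad():
        q = qnet(torch.tensor(positions.flatten(), dtype=torch.float32))
    candidates = torch.topk(q, top_k).indices.tolist()
    return max(candidates, key=lambda a: rollout_value(step(positions, ACTIONS[a])))

if __name__ == "__main__":
    qnet = QNet()
    layout = np.random.randint(0, GRID, size=(N_OBJECTS, 2))
    for t in range(5):
        a = choose_action(qnet, layout)
        layout = step(layout, ACTIONS[a])
        print(f"step {t}: action={ACTIONS[a]} reward={clearance_reward(layout):.3f}")
```

The sketch only shows how Q-value ranking and rollout evaluation can be combined into a single action-selection step; the paper's action scorer, search, and reward terms are substantially richer.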