Haisor: Human-aware Indoor Scene Optimization via Deep Reinforcement Learning

Cited by: 1
Authors
Sun, Jia-Mu [1 ]
Yang, Jie [1 ]
Mo, Kaichun [2 ]
Lai, Yu-Kun [3 ]
Guibas, Leonidas [2 ]
Gao, Lin [1 ]
Affiliations
[1] Chinese Acad Sci, Beijing Key Lab Mobile Comp & Pervas Device, Inst Comp Technol, Beijing 100190, Peoples R China
[2] Stanford Univ, Dept Comp Sci, 450 Serra Mall, Stanford, CA 94305 USA
[3] Cardiff Univ, Sch Comp Sci & Informat, Cardiff CF10 3AT, Wales
Source
ACM TRANSACTIONS ON GRAPHICS | 2024, Vol. 43, No. 2
Funding
National Natural Science Foundation of China;
Keywords
Scene optimization; scene synthesis; human aware; reinforcement learning; Monte Carlo search; robot simulation; imitation learning; REARRANGEMENT;
DOI
10.1145/3632947
CLC number (Chinese Library Classification)
TP31 [Computer Software];
Subject classification codes
081202; 0835;
Abstract
3D scene synthesis facilitates and benefits many real-world applications. Most scene generators focus on making indoor scenes plausible by learning from training data and leveraging extra constraints such as adjacency and symmetry. Although the generated 3D scenes are mostly plausible, with visually realistic layouts, they can be functionally unsuitable for human users to navigate and interact with furniture. Our key observation is that human activity plays a critical role and sufficient free space is essential for human-scene interactions. This is exactly where many existing synthesized scenes fail: the seemingly correct layouts are often unfit for living. To tackle this, we present Haisor, a human-aware optimization framework for 3D indoor scene arrangement via reinforcement learning, which aims to find an action sequence that optimizes the indoor scene layout automatically. Based on a hierarchical scene graph representation, an optimal action sequence is predicted and performed via Deep Q-Learning with Monte Carlo Tree Search (MCTS), where MCTS is the key component for searching long-term sequences in a large action space. Multiple human-aware rewards are designed as the core criteria of human-scene interaction, guiding reinforcement learning to identify the next smart action. Our framework is optimized end-to-end, given indoor scenes with part-level furniture layouts including part mobility information. Furthermore, our methodology is extensible, allowing different reward designs to achieve personalized indoor scene synthesis. Extensive experiments demonstrate that our approach optimizes the layout of 3D indoor scenes in a human-aware manner, producing results that are more realistic and plausible than those of state-of-the-art generators, and that our approach produces superior smart actions, outperforming alternative baselines.
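The record does not include the paper's actual network architecture or reward definitions, so the following is only an illustrative toy sketch of the general idea the abstract describes: Monte Carlo Tree Search over discrete furniture-move actions, scored by a free-space reward. The 1D "track" scene, the minimum-gap reward, and all function names here are hypothetical stand-ins, not Haisor's method; a real implementation would combine MCTS with a learned Q-network over a hierarchical scene graph.

```python
import math
import random

TRACK = 10          # discrete positions 0..9 on a toy 1D "scene"
ITEMS = 3           # number of furniture pieces
MOVES = [-1, +1]    # move one item one step left or right

def actions(state):
    """All legal (item, direction) moves that stay on the track and avoid collisions."""
    acts = []
    for i in range(ITEMS):
        for d in MOVES:
            p = state[i] + d
            if 0 <= p < TRACK and p not in state:
                acts.append((i, d))
    return acts

def step(state, act):
    """Apply one move and return the new layout."""
    i, d = act
    s = list(state)
    s[i] += d
    return tuple(s)

def reward(state):
    """Free-space proxy: the minimum gap between adjacent items (larger is better)."""
    xs = sorted(state)
    return min(b - a for a, b in zip(xs, xs[1:]))

def rollout(state, depth=5):
    """Random playout used to estimate the value of a layout."""
    for _ in range(depth):
        acts = actions(state)
        if not acts:
            break
        state = step(state, random.choice(acts))
    return reward(state)

def mcts_choose(state, n_sim=200, c=1.4):
    """One-level MCTS: UCB1 over root actions, random rollouts below."""
    acts = actions(state)
    if not acts:
        return None
    visits = [0] * len(acts)
    values = [0.0] * len(acts)
    for t in range(1, n_sim + 1):
        best, best_ucb = 0, -float("inf")
        for k in range(len(acts)):
            if visits[k] == 0:          # try every action once first
                best = k
                break
            ucb = values[k] / visits[k] + c * math.sqrt(math.log(t) / visits[k])
            if ucb > best_ucb:
                best, best_ucb = k, ucb
        values[best] += rollout(step(state, acts[best]))
        visits[best] += 1
    # return the most-visited action, as is standard for MCTS
    return acts[max(range(len(acts)), key=lambda k: visits[k])]

random.seed(0)
state = (0, 1, 2)                       # cramped layout: items packed together
for _ in range(6):
    state = step(state, mcts_choose(state))
print("final layout:", state, "min gap:", reward(state))
```

The search typically spreads the items apart, since rollouts from spacious layouts score higher; the paper's human-aware rewards (navigability, interaction space, part mobility) would replace the minimum-gap proxy used here.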
Pages: 17