PQ-Transformer: Jointly Parsing 3D Objects and Layouts From Point Clouds

Cited by: 23
Authors
Chen, Xiaoxue [1 ]
Zhao, Hao [2 ]
Zhou, Guyue [1 ]
Zhang, Ya-Qin [1 ]
Affiliations
[1] Tsinghua Univ, Inst AI Ind Res, Beijing 100190, Peoples R China
[2] Peking Univ, Intel Labs China, Beijing 100871, Peoples R China
Keywords
Object detection; layout; point cloud; network
DOI
10.1109/LRA.2022.3143224
CLC classification
TP24 [Robotics]
Discipline codes
080202; 1405
Abstract
3D scene understanding from point clouds plays a vital role in various robotic applications. Unfortunately, current state-of-the-art methods use separate neural networks for different tasks such as object detection and room layout estimation. Such a scheme has two limitations: 1) storing and running several networks for different tasks is expensive for typical robotic platforms; 2) the intrinsic structure shared by the separate outputs is ignored and potentially violated. To this end, we propose the first transformer architecture that predicts 3D objects and layouts simultaneously from point cloud inputs. Unlike existing methods that estimate either layout keypoints or edges, we directly parameterize the room layout as a set of quads. Accordingly, the proposed architecture is termed P(oint)Q(uad)-Transformer. Along with the novel quad representation, we propose a tailored physical constraint loss function that discourages object-layout interference. Quantitative and qualitative evaluations on the public benchmark ScanNet show that the proposed PQ-Transformer succeeds in jointly parsing 3D objects and layouts, running at a quasi-real-time rate (8.91 FPS) without efficiency-oriented optimization. Moreover, the new physical constraint loss improves strong baselines, and the F1-score of room layout estimation is significantly improved from 37.9% to 57.9%.
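The abstract describes two key ideas: parameterizing each wall of a room layout as a quad, and penalizing object boxes that interfere with the layout. The sketch below illustrates one plausible realization, assuming a quad is encoded as (center, unit normal, size) and the interference penalty is a hinge loss on object corners that cross a wall plane toward the room exterior; the function names and the exact parameterization are assumptions for illustration, not the paper's definitive formulation.

```python
import numpy as np

def quad_params(center, normal, size):
    """An assumed wall-quad parameterization: (center, unit normal, (w, h)).
    The paper's exact quad encoding may differ."""
    n = np.asarray(normal, dtype=float)
    return np.asarray(center, dtype=float), n / np.linalg.norm(n), size

def interference_penalty(box_corners, quad_center, quad_normal):
    """Hinge penalty for object corners crossing a wall plane.

    Convention (an assumption): quad_normal points toward the room
    interior, so the signed distance d = n . (c - p) is negative for a
    corner c outside the wall.  We penalize sum(max(0, -d)) over corners.
    """
    d = (box_corners - quad_center) @ quad_normal  # signed distances
    return float(np.maximum(0.0, -d).sum())
```

For example, with a wall quad at the origin whose normal points along +x, corners with positive x incur no penalty, while a corner at x = -0.5 contributes 0.5 to the loss. In a training loop, such a term would be added to the detection and layout losses with a weighting coefficient.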
Pages: 2519-2526 (8 pages)