A State Space Model for Multiobject Full 3-D Information Estimation From RGB-D Images

Cited by: 0
Authors
Zhou, Jiaming [1 ]
Zhu, Qing [1 ]
Wang, Yaonan [1 ]
Feng, Mingtao [2 ]
Liu, Jian [1 ]
Huang, Jianan [1 ]
Mian, Ajmal [3 ]
Affiliations
[1] Hunan Univ, Coll Elect & Informat Engn, Natl Engn Res Ctr Robot Visual Percept & Control, Changsha 410082, Peoples R China
[2] Xidian Univ, Sch Artificial Intelligence, Xian 710071, Peoples R China
[3] Univ Western Australia, Dept Comp Sci & Software Engn, Perth, WA 6009, Australia
Funding
Australian Research Council; National Natural Science Foundation of China;
Keywords
Shape; Three-dimensional displays; Solid modeling; Computational modeling; Image reconstruction; Codes; Accuracy; Visualization; Point cloud compression; Head; Mamba; pose estimation; shape reconstruction; state space model (SSM);
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Visual understanding of 3-D objects is essential for robotic manipulation, autonomous navigation, and augmented reality. However, existing methods struggle to perform this task efficiently and accurately in an end-to-end manner. We propose a single-shot method based on the state space model (SSM) to predict the full 3-D information (pose, size, shape) of multiple 3-D objects from a single RGB-D image in an end-to-end manner. Our method first encodes long-range semantic information from the RGB and depth images separately and then combines them into an integrated latent representation that is processed by a modified SSM to infer the full 3-D information in two separate task heads within a unified model. A heatmap/detection head predicts object centers, and a 3-D information head predicts a matrix detailing the pose, size, and latent shape code for each detected object. We also propose an SSM-based shape autoencoder, which learns canonical shape codes derived from a large database of 3-D point cloud shapes. The end-to-end framework, the modified SSM block, and the SSM-based shape autoencoder form the major contributions of this work. Our design includes different scan strategies tailored to different input data representations, such as RGB-D images and point clouds. Extensive evaluations on the REAL275, CAMERA25, and Wild6D datasets show that our method achieves state-of-the-art performance. On the large-scale Wild6D dataset, our model significantly outperforms the nearest competitor, achieving 2.6% and 5.1% improvements on the IoU50 and 5°10 cm metrics, respectively.
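The core building block named in the abstract is the state space model. As a point of reference, a plain discrete linear SSM processes a sequence through the recurrence x_t = A x_{t-1} + B u_t with readout y_t = C x_t. The sketch below is only this generic recurrence with illustrative toy parameters; it is not the paper's modified (selective, Mamba-style) SSM block, whose parameters are input-dependent.

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Scan a discrete linear state space model over a 1-D input sequence.

    Recurrence: x_t = A x_{t-1} + B u_t, output y_t = C x_t.
    Toy illustration only; the paper's SSM varies its parameters per step.
    """
    x = np.zeros(A.shape[0])          # initial hidden state
    ys = []
    for u_t in u:
        x = A @ x + B * u_t           # state update
        ys.append(C @ x)              # scalar readout
    return np.array(ys)

# Illustrative parameters: 2-D hidden state, scalar input/output.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([1.0, 0.5])
C = np.array([1.0, -1.0])
u = np.ones(4)                        # constant input sequence

y = ssm_scan(A, B, C, u)
```

Because the recurrence is linear, the same computation can also be unrolled into a convolution over the input, which is what makes SSM layers efficient to train on long sequences.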
Pages: 2248-2260
Page count: 13