Category-Level Object Pose Estimation via Global High-Order Pooling

Times Cited: 1
Authors
Jiang, Changhong [1 ]
Mu, Xiaoqiao [2 ]
Zhang, Bingbing [3 ]
Xie, Mujun [1 ]
Liang, Chao [4 ]
Affiliations
[1] Changchun Univ Technol, Sch Elect & Elect Engn, Changchun 130012, Peoples R China
[2] Changchun Univ Technol, Sch Mech & Elect Engn, Changchun 130012, Peoples R China
[3] Dalian Minzu Univ, Sch Comp Sci & Engn, Dalian 116602, Peoples R China
[4] Changchun Univ Technol, Coll Comp Sci & Engn, Changchun 130012, Peoples R China
Keywords
pose estimation; pooling; high-order
DOI
10.3390/electronics13091720
Chinese Library Classification (CLC)
TP [Automation technology, computer technology]
Discipline classification code
0812
Abstract
Category-level 6D object pose estimation aims to predict the rotation, translation, and size of object instances in arbitrary scenes. Current methods usually rely on global average pooling (a first-order operation) to aggregate geometric features; this captures only the first-order statistics of the features and does not fully exploit the network's potential. In this work, we propose a new high-order pose estimation network (HoPENet), which enhances feature representation by collecting high-order statistics to model high-order geometric features at each stage of the network. HoPENet introduces a global high-order enhancement module that uses global high-order pooling to capture correlations between features and fuse global information. This module also captures long-range statistical dependencies, making full use of contextual information, so the network ultimately obtains a more discriminative feature representation. Experiments on two benchmarks, the synthetic CAMERA25 dataset and the real-world REAL275 dataset, demonstrate the effectiveness of HoPENet, which achieves state-of-the-art (SOTA) pose estimation performance.
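The abstract's core contrast, first-order global average pooling versus pooling on higher-order statistics, can be illustrated with a minimal NumPy sketch of global second-order (covariance) pooling. This is an assumption-laden illustration of the general technique, not the authors' HoPENet implementation:

```python
import numpy as np

def global_average_pooling(feat):
    # feat: (C, N) array of C feature channels over N spatial locations.
    # First-order statistic: the per-channel mean.
    return feat.mean(axis=1)                        # shape (C,)

def global_second_order_pooling(feat):
    # Second-order statistic: the channel covariance matrix, which
    # encodes pairwise correlations between feature channels.
    centered = feat - feat.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / feat.shape[1]     # shape (C, C)
    return cov

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 64))   # toy map: 8 channels, 64 locations
gap = global_average_pooling(feat)
gsop = global_second_order_pooling(feat)
print(gap.shape, gsop.shape)          # (8,) (8, 8)
```

The first-order descriptor is a C-dimensional vector, while the second-order descriptor is a C x C symmetric matrix, which is why covariance-style pooling can expose cross-channel correlations that average pooling discards.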
Pages: 11
Related References
32 records in total
[1]   SGPA: Structure-Guided Prior Adaptation for Category-Level 6D Object Pose Estimation [J].
Chen, Kai ;
Dou, Qi .
2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, :2753-2762
[2]  
Chen Wang, 2020, 2020 IEEE International Conference on Robotics and Automation (ICRA), P10059, DOI 10.1109/ICRA40945.2020.9196679
[3]   FS-Net: Fast Shape-based Network for Category-Level 6D Object Pose Estimation with Decoupled Rotation Mechanism [J].
Chen, Wei ;
Jia, Xi ;
Chang, Hyung Jin ;
Duan, Jinming ;
Shen, Linlin ;
Leonardis, Ales .
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, :1581-1590
[4]   Multi-View 3D Object Detection Network for Autonomous Driving [J].
Chen, Xiaozhi ;
Ma, Huimin ;
Wan, Ji ;
Li, Bo ;
Xia, Tian .
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, :6526-6534
[5]   Category Level Object Pose Estimation via Neural Analysis-by-Synthesis [J].
Chen, Xu ;
Dong, Zijian ;
Song, Jie ;
Geiger, Andreas ;
Hilliges, Otmar .
COMPUTER VISION - ECCV 2020, PT XXVI, 2020, 12371 :139-156
[6]   The MOPED framework: Object recognition and pose estimation for manipulation [J].
Collet, Alvaro ;
Martinez, Manuel ;
Srinivasa, Siddhartha S. .
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2011, 30 (10) :1284-1306
[7]   Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis [J].
Dai, Angela ;
Qi, Charles Ruizhongtai ;
Niessner, Matthias .
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, :6545-6554
[8]   GPV-Pose: Category-level Object Pose Estimation via Geometry-guided Point-wise Voting [J].
Di, Yan ;
Zhang, Ruida ;
Lou, Zhiqiang ;
Manhardt, Fabian ;
Ji, Xiangyang ;
Navab, Nassir ;
Tombari, Federico .
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, :6771-6781
[9]   Global Second-order Pooling Convolutional Networks [J].
Gao, Zilin ;
Xie, Jiangtao ;
Wang, Qilong ;
Li, Peihua .
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, :3019-3028
[10]   Matrix Backpropagation for Deep Networks with Structured Layers [J].
Ionescu, Catalin ;
Vantzos, Orestis ;
Sminchisescu, Cristian .
2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, :2965-2973