Lite-HRPE: A 6DoF Object Pose Estimation Method for Resource-limited Platforms

Cited by: 0
Authors
Liu, Xin [1 ,2 ,3 ]
Guan, Qi [1 ,2 ,3 ]
Xue, Shibei [1 ,2 ,3 ]
Zhao, Dezong [4 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Automat, Shanghai 200240, Peoples R China
[2] Minist Educ China, Key Lab Syst Control & Informat Proc, Shanghai 200240, Peoples R China
[3] Shanghai Engn Res Ctr Intelligent Control & Manag, Shanghai 200240, Peoples R China
[4] Univ Glasgow, James Watt Sch Engn, Glasgow G12 8QQ, Lanark, Scotland
Source
2024 IEEE 18TH INTERNATIONAL CONFERENCE ON CONTROL & AUTOMATION, ICCA 2024 | 2024
Funding
National Natural Science Foundation of China
Keywords
NETWORK;
DOI
10.1109/ICCA62789.2024.10591899
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Accurately estimating the six-degree-of-freedom pose of objects is essential for intelligent robotics. Although significant progress has been made in this area, most studies do not account for the hardware limitations of the deployment platform, which remains a major challenge in resource-constrained scenarios. To address this issue, we propose Lite-HRPE, a lightweight RGB-based pose estimation method that leverages a multi-branch parallel structure to extract the spatial and semantic information of keypoints for pose estimation. Additionally, Lite-HRPE adopts the G-block and G-neck modular structure and streamlines the original feature extraction network to realize a compact network design. This allows Lite-HRPE to strike a balance between pose estimation accuracy, parameter count, computational load, and runtime speed. Our evaluation on public datasets shows that Lite-HRPE achieves 95.7% accuracy with only 10.8% of the parameters and 11.8% of the FLOPs of HybridPose.
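The record does not include implementation details, so the following is only a minimal Python sketch of the generic keypoint-based RGB pipeline the abstract describes: a lightweight network predicts 2D keypoints, and the 6DoF pose is then recovered with a PnP solver. The function names, keypoint count, and camera intrinsics below are illustrative assumptions, not Lite-HRPE's actual G-block/G-neck architecture.

    import numpy as np
    import cv2

    # Hypothetical illustration only: the record does not specify Lite-HRPE's
    # layer configuration, so this sketches the generic keypoint-based RGB
    # pipeline (lightweight backbone -> 2D keypoints -> PnP for the 6DoF pose).

    def predict_keypoints_2d(image_rgb):
        """Placeholder for the lightweight multi-branch network.

        In Lite-HRPE this step would be performed by the streamlined
        G-block/G-neck backbone; here we just return dummy detections
        of shape (N, 2) in pixel coordinates.
        """
        n_keypoints = 8                                   # assumed number of model keypoints
        h, w, _ = image_rgb.shape
        return np.random.rand(n_keypoints, 2) * [w, h]

    def recover_pose(keypoints_2d, keypoints_3d, camera_matrix):
        """Solve the Perspective-n-Point problem for rotation and translation."""
        ok, rvec, tvec = cv2.solvePnP(
            keypoints_3d.astype(np.float64),
            keypoints_2d.astype(np.float64),
            camera_matrix,
            distCoeffs=None,
            flags=cv2.SOLVEPNP_EPNP,
        )
        if not ok:
            raise RuntimeError("PnP failed")
        R, _ = cv2.Rodrigues(rvec)                        # rotation vector -> 3x3 matrix
        return R, tvec

    if __name__ == "__main__":
        image = np.zeros((480, 640, 3), dtype=np.uint8)   # dummy RGB frame
        keypoints_3d = np.random.rand(8, 3) * 0.1         # assumed object-frame keypoints (metres)
        K = np.array([[572.4, 0.0, 325.3],                # illustrative pinhole intrinsics
                      [0.0, 573.6, 242.0],
                      [0.0, 0.0, 1.0]])
        kps_2d = predict_keypoints_2d(image)
        R, t = recover_pose(kps_2d, keypoints_3d, K)
        print("R =\n", R, "\nt =", t.ravel())

A pipeline of this shape keeps the learned component confined to 2D keypoint prediction, with the pose itself recovered by a geometric solver, which is one reason keypoint-based methods can use small backbones on resource-limited platforms.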
Pages: 1006 - 1011
Number of pages: 6