CatTrack: Single-Stage Category-Level 6D Object Pose Tracking via Convolution and Vision Transformer

Cited by: 2
Authors
Yu, Sheng [1 ]
Zhai, Di-Hua [1 ,2 ]
Xia, Yuanqing [1 ]
Li, Dong [3 ]
Zhao, Shiqi [3 ]
Affiliations
[1] Beijing Inst Technol, Sch Automat, Beijing 100081, Peoples R China
[2] Beijing Inst Technol, Yangtze Delta Reg Acad, Jiaxing 314001, Peoples R China
[3] China Unicom Res Inst, Beijing 102676, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
pose tracking; transformer; pose estimation; space
DOI
10.1109/TMM.2023.3284598
CLC Classification
TP [Automation technology, computer technology]
Discipline Classification Code
0812
Abstract
Much existing research has focused on instance-level pose tracking, which requires a 3D model of the object in advance and is therefore difficult to apply in practice. To address this limitation, category-level object pose tracking methods have been proposed, and achieving accurate, fast monocular category-level pose tracking remains an essential research goal. In this article, we propose CatTrack, a new single-stage keypoint-based monocular category-level multi-object pose tracking network. A central issue in object pose tracking is using information from the previous frame to guide pose estimation in the next frame; because object poses and camera information differ from frame to frame, irrelevant information must be removed and useful features emphasized. To this end, we propose a transformer-based temporal information capture module that leverages the positions of keypoints from the previous frame. We further propose a new keypoint matching module that enables the grouping and matching of object keypoints in complex scenes. We apply CatTrack to the Objectron dataset and achieve superior results compared with existing methods, and we also evaluate the generalization of CatTrack by tracking the 6D pose of unseen real-world objects.
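The abstract does not give the internals of the temporal information capture module; the following is a minimal sketch of one plausible design, in which current-frame feature tokens cross-attend to keypoint embeddings from the previous frame so that attention weights can suppress information that no longer applies. The class name, tensor shapes, and dimensions (TemporalKeypointAttention, 9 keypoints, 256-dim features) are hypothetical, not taken from the paper.

# Hypothetical sketch: current-frame features attend to previous-frame
# keypoint embeddings via cross-attention. Names are illustrative only.
import torch
import torch.nn as nn

class TemporalKeypointAttention(nn.Module):
    """Cross-attention from current-frame tokens to previous-frame keypoints."""
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, curr_feats, prev_kp_embed):
        # curr_feats:    (B, N, dim) flattened feature tokens of the current frame
        # prev_kp_embed: (B, K, dim) embeddings of previous-frame keypoints
        attended, _ = self.attn(query=curr_feats,
                                key=prev_kp_embed,
                                value=prev_kp_embed)
        # Residual connection keeps current-frame evidence dominant, so the
        # attention weights can down-weight stale previous-frame information.
        return self.norm(curr_feats + attended)

# Usage with dummy tensors: 9 keypoints (e.g., 3D box corners plus center)
B, N, K, dim = 2, 1024, 9, 256
module = TemporalKeypointAttention(dim)
out = module(torch.randn(B, N, dim), torch.randn(B, K, dim))
print(out.shape)  # torch.Size([2, 1024, 256])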
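Likewise, the keypoint matching module is only named in the abstract. As a stand-in illustration, the sketch below matches detected keypoints in the current frame to tracked keypoints from the previous frame by minimizing descriptor distance with the Hungarian algorithm; this is a generic assignment technique, not the paper's actual module, and the function and shapes are hypothetical.

# Hypothetical sketch: one-to-one keypoint assignment across frames by
# minimizing pairwise descriptor distance (Hungarian algorithm).
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_keypoints(prev_desc: np.ndarray, curr_desc: np.ndarray):
    """prev_desc: (K, D) descriptors of tracked keypoints from frame t-1.
       curr_desc: (M, D) descriptors of detected keypoints in frame t.
       Returns (prev_idx, curr_idx) pairs with minimal total matching cost."""
    # Pairwise Euclidean distance matrix of shape (K, M) as the assignment cost.
    cost = np.linalg.norm(prev_desc[:, None, :] - curr_desc[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# Usage with random descriptors: 9 tracked vs. 12 detected keypoints
matches = match_keypoints(np.random.rand(9, 64), np.random.rand(12, 64))
print(matches)  # e.g., [(0, 3), (1, 7), ...]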
Pages: 1665-1680 (16 pages)