Part-based tracking for object pose estimation

Citations: 0
Authors
Ye, Shuang [1 ,2 ,3 ]
Ye, Jianhong [1 ]
Lei, Qing [1 ,2 ,3 ]
Affiliations
[1] Huaqiao Univ, Sch Comp Sci & Technol, Xiamen, Peoples R China
[2] Huaqiao Univ, Xiamen Key Lab Comp Vis & Pattern Recognit, Xiamen 361000, Peoples R China
[3] Huaqiao Univ, Fujian Prov Univ, Key Lab Comp Vis & Machine Learning, Xiamen, Peoples R China
Keywords
Object pose estimation; Part-based tracking; Local features matching; Detection optimization; Object tracking; Frame-by-frame tracking;
DOI
10.1007/s11554-023-01351-2
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Object pose estimation is crucial in human-computer interaction systems. Traditional point-based detection approaches rely on the robustness of feature points; tracking methods exploit interframe similarity to improve speed; and recent neural-network-based studies concentrate on solving specific invariance problems. Unlike these methods, PTPE (Part-based Tracking for Pose Estimation), proposed in this paper, focuses on balancing speed and accuracy under different conditions. In this method, point matching is transformed into part matching inside an object to enhance the reliability of the features. Additionally, a fast interframe tracking method is combined with learning models and structural information to enhance robustness. During tracking, different strategies are adopted for different parts according to the matching quality evaluated by the learning models, so as to exploit locality and avoid the time cost of undifferentiated full-frame detection or learning. In addition, constraints between parts are applied to optimize part detection. Experiments show that PTPE is efficient in both accuracy and speed, especially in complex environments, compared with classical algorithms that focus only on detection, interframe tracking, self-supervised models, or graph matching.
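The abstract's per-part strategy selection (keep fast interframe tracking for parts whose matches score well, fall back to full detection otherwise, then apply inter-part structural constraints) could be sketched roughly as below. This is a minimal illustrative sketch, not the paper's actual implementation: all names (`track_parts`, `enforce_constraints`, the threshold value) are assumptions introduced here.

```python
import numpy as np

def track_parts(parts, match_scores, flow_track, detect, score_thresh=0.6):
    """Per-part strategy selection (hypothetical sketch of the PTPE idea):
    parts whose interframe match score is high keep the cheap flow-based
    track; low-scoring parts fall back to costly re-detection."""
    updated = {}
    for name, pose in parts.items():
        if match_scores[name] >= score_thresh:
            updated[name] = flow_track(name, pose)  # fast interframe tracking
        else:
            updated[name] = detect(name)            # full detection for this part
    return updated

def enforce_constraints(positions, ref_part, expected_offsets, tol):
    """Clamp each part's 2D position to within `tol` of its expected offset
    from a reference part -- a toy stand-in for the paper's inter-part
    structural constraints used to optimize part detection."""
    ref = np.asarray(positions[ref_part], dtype=float)
    out = {ref_part: ref}
    for name, pos in positions.items():
        if name == ref_part:
            continue
        target = ref + np.asarray(expected_offsets[name], dtype=float)
        out[name] = np.clip(np.asarray(pos, dtype=float),
                            target - tol, target + tol)
    return out
```

In a real system `flow_track` would wrap something like pyramidal Lucas-Kanade (reference [6] below) and `detect` a per-part detector; the point of the split is that detection runs only on the parts the learning model flags as unreliable.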
Pages: 18
Related papers
27 records
[1]   Fast Explicit Diffusion for Accelerated Features in Nonlinear Scale Spaces [J].
Alcantarilla, Pablo F. ;
Nuevo, Jesus ;
Bartoli, Adrien .
PROCEEDINGS OF THE BRITISH MACHINE VISION CONFERENCE 2013, 2013,
[2]   Illumination invariant optical flow using neighborhood descriptors [J].
Ali, Sharib ;
Daul, Christian ;
Galbrun, Ernest ;
Blondel, Walter .
COMPUTER VISION AND IMAGE UNDERSTANDING, 2016, 145 :95-110
[3]   Edge-based markerless 3D tracking of rigid objects [J].
Barandiaran, Javier ;
Borro, Diego .
17TH INTERNATIONAL CONFERENCE ON ARTIFICIAL REALITY AND TELEXISTENCE, ICAT 2007, PROCEEDINGS, 2007, :282-283
[4]   Speeded-Up Robust Features (SURF) [J].
Bay, Herbert ;
Ess, Andreas ;
Tuytelaars, Tinne ;
Van Gool, Luc .
COMPUTER VISION AND IMAGE UNDERSTANDING, 2008, 110 (03) :346-359
[5]   SilhoNet: An RGB Method for 6D Object Pose Estimation [J].
Billings, Gideon ;
Johnson-Roberson, Matthew .
IEEE ROBOTICS AND AUTOMATION LETTERS, 2019, 4 (04) :3727-3734
[6]  
Bouguet J.-Y., 2001, INTEL CORPORATION, V5, DOI 10.1109/ICETET.2009.154
[7]   Robust 3D Object Tracking from Monocular Images Using Stable Parts [J].
Crivellaro, Alberto ;
Rad, Mahdi ;
Verdie, Yannick ;
Yi, Kwang Moo ;
Fua, Pascal ;
Lepetit, Vincent .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (06) :1465-1479
[8]   SuperPoint: Self-Supervised Interest Point Detection and Description [J].
DeTone, Daniel ;
Malisiewicz, Tomasz ;
Rabinovich, Andrew .
PROCEEDINGS 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2018, :337-349
[9]  
Dusmanu M, 2019, arXiv:1905.03561, DOI 10.48550/arXiv.1905.03561
[10]   Determining Optical Flow [J].
Horn, B. K. P. ;
Schunck, B. G. .
ARTIFICIAL INTELLIGENCE, 1981, 17 (1-3) :185-203