Multi-sensor based real-time 6-DoF pose tracking for wearable augmented reality

Cited: 20
Authors
Fang, Wei [1 ]
Zheng, Lianyu [1 ]
Wu, Xiangyong [2 ]
Affiliations
[1] Beihang Univ, Sch Mech Engn & Automat, Xueyuan Rd 37, Beijing 100191, Peoples R China
[2] Tianjin Inst Surveying & Mapping, Changling Rd, Tianjin 300381, Peoples R China
Keywords
Wearable augmented reality; Sensor-fusion; Markerless; Pose tracking; Scale estimation; ODOMETRY; VISION; SLAM; ORB;
DOI
10.1016/j.compind.2017.06.002
CLC number
TP39 [Computer applications];
Discipline classification code
081203 ; 0835 ;
Abstract
Wearable augmented reality (WAR) combines a live view of a real scene with computer-generated graphics on resource-limited platforms. One of the crucial technologies for WAR is real-time 6-DoF pose tracking, which enables the registration of virtual components within a real scene. Artificial markers are typically applied to provide pose tracking for WAR applications; however, these marker-based methods suffer from marker occlusion and large viewpoint changes. This paper therefore applies a multi-sensor tracking approach that performs real-time 6-DoF pose tracking with real-time scale estimation for WAR on a consumer smartphone. By combining a wide-angle monocular camera with an inertial sensor, more robust 6-DoF motion tracking is demonstrated through the mutual compensation of the heterogeneous sensors. Moreover, with the help of a depth sensor, the scale initialization of the monocular tracking is addressed: the initial scale is propagated through the subsequent sensor-fusion process, alleviating the scale drift of traditional monocular tracking approaches. In addition, a sliding-window based Kalman filter framework provides low-jitter pose tracking for WAR. Finally, experiments demonstrate the feasibility and robustness of the proposed tracking method for WAR applications. (C) 2017 Elsevier B.V. All rights reserved.
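The jitter-reduction idea behind the Kalman filtering mentioned in the abstract can be illustrated with a minimal 1-D sketch. This is an assumption-laden toy example, not the paper's sliding-window formulation: the function name `kalman_1d`, the constant-position motion model, and the noise parameters `q` and `r` are all invented for illustration.

```python
# Illustrative 1-D Kalman filter: smooth noisy scalar measurements
# (e.g. one pose coordinate) by blending prediction and observation.
# Toy constant-position model, NOT the paper's sliding-window method.

def kalman_1d(measurements, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """q: process-noise variance, r: measurement-noise variance,
    x0/p0: initial state estimate and its variance (all assumed)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: uncertainty grows over time
        k = p / (p + r)            # Kalman gain: trust in the measurement
        x = x + k * (z - x)        # update: pull estimate toward z
        p = (1.0 - k) * p          # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

# Noisy readings jittering around 1.0 are smoothed toward 1.0.
noisy = [1.2, 0.8, 1.1, 0.9, 1.05, 0.95]
smooth = kalman_1d(noisy, x0=1.0)
```

The smoothed sequence varies less than the raw input, which is the effect the filter framework exploits to deliver low-jitter pose updates; the paper's actual filter additionally operates over a sliding window of states and fuses visual, inertial, and depth-derived measurements.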
Pages: 91-103
Page count: 13
Cited references
38 in total
  • [1] AIR-MODELLING: A tool for gesture-based solid modelling in context during early design stages in AR environments
    Arroyave-Tobon, Santiago
    Osorio-Gomez, Gilberto
    Cardona-McCormick, Juan F.
    [J]. COMPUTERS IN INDUSTRY, 2015, 66 : 73 - 81
  • [2] Blasch E, 2012, ARTECH HSE INTEL INF, P1
  • [3] Kalman Filter for Robot Vision: A Survey
    Chen, S. Y.
    [J]. IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, 2012, 59 (11) : 4409 - 4420
  • [4] Endres F, 2012, IEEE INT CONF ROBOT, P1691, DOI 10.1109/ICRA.2012.6225199
  • [5] LSD-SLAM: Large-Scale Direct Monocular SLAM
    Engel, Jakob
    Schoeps, Thomas
    Cremers, Daniel
    [J]. COMPUTER VISION - ECCV 2014, PT II, 2014, 8690 : 834 - 849
  • [6] A model-based approach for data integration to improve maintenance management by mixed reality
    Espindola, Danubia Bueno
    Fumagalli, Luca
    Garetti, Marco
    Pereira, Carlos E.
    Botelho, Silvia S. C.
    Henriques, Renato Ventura
    [J]. COMPUTERS IN INDUSTRY, 2013, 64 (04) : 376 - 391
  • [7] Augmented reality on large screen for interactive maintenance instructions
    Fiorentino, Michele
    Uva, Antonio E.
    Gattullo, Michele
    Debernardis, Saverio
    Monno, Giuseppe
    [J]. COMPUTERS IN INDUSTRY, 2014, 65 (02) : 270 - 278
  • [8] Forster C, 2015, ROBOTICS: SCIENCE AND SYSTEMS XI
  • [9] Godha S., 2006, THESIS
  • [10] CAD-based 3D objects recognition in monocular images for mobile augmented reality
    Han, Pengfei
    Zhao, Gang
    [J]. COMPUTERS & GRAPHICS-UK, 2015, 50 : 36 - 46