Fusion of 2d and 3d sensor data for articulated body tracking

Cited by: 27
Authors
Knoop, Steffen [1 ]
Vacek, Stefan [1 ]
Dillmann, Ruediger [1 ]
Affiliations
[1] Univ Karlsruhe TH, Inst Comp Sci & Engn CSE, Karlsruhe, Germany
Keywords
Human motion capture; Sensor fusion; Time-of-flight; 3D body model; Human robot interaction;
DOI
10.1016/j.robot.2008.10.017
Chinese Library Classification: TP [Automation technology, computer technology];
Discipline Classification Code: 0812;
Abstract
In this article, we present an approach for the fusion of 2D and 3D measurements for model-based person tracking, also known as human motion capture. The applied body model is defined geometrically with generalized cylinders and is set up hierarchically with connecting joints of different types. The joint model can be parameterized to control the degrees of freedom, adhesion and stiffness. This results in an articulated body model with constrained kinematic degrees of freedom. The fusion approach incorporates this model knowledge together with the measurements and tracks the target body iteratively with an extended Iterative Closest Point (ICP) approach. Generally, ICP is based on the concept of correspondences between measurements and model, which is normally exploited to incorporate 3D point cloud measurements. This concept has been generalized to also represent and incorporate 2D image-space features. Together with the 3D point cloud from a 3D time-of-flight (ToF) camera, arbitrary features derived from 2D camera images are used in the fusion algorithm for tracking of the body. This provides complementary information about the tracked body, enabling tracking not only of depth motions but also of turning movements of the human body, which is normally a hard problem for markerless human motion capture systems. The resulting tracking system, named VooDoo, is used to track humans in a Human-Robot Interaction (HRI) context. We rely only on sensors on board the robot, i.e. the color camera, the ToF camera and a laser range finder. The system runs in real time (approximately 20 Hz) and is able to robustly track a human in the vicinity of the robot. (C) 2008 Elsevier B.V. All rights reserved.
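The abstract above outlines an extended ICP formulation in which 3D point-cloud correspondences and 2D image-feature correspondences are stacked into one iterative update of an articulated model. The following is a minimal, self-contained sketch of that idea only, assuming a toy single-limb model with two joint angles, a pinhole projection, and a numerical Gauss-Newton update; all names and parameters here (forward_model, project, w2d, the 20 Hz-free toy setup) are illustrative assumptions and do not reflect the VooDoo system's actual implementation.

```python
import numpy as np

def forward_model(theta, n_pts=20):
    """Sample points along a unit-length limb axis rotated by two joint angles."""
    rx, ry = theta
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    axis = np.linspace(0.0, 1.0, n_pts)[:, None] * np.array([0.0, 0.0, 1.0])
    return axis @ (Ry @ Rx).T

def project(p, f=500.0, z0=2.0):
    """Pinhole projection of 3D model points into the 2D image plane."""
    z = p[:, 2] + z0
    return f * p[:, :2] / z[:, None]

def residuals(theta, cloud_3d, feats_2d, w2d=0.1):
    """Stack 3D nearest-neighbour residuals with weighted 2D reprojection residuals."""
    model = forward_model(theta)
    # 3D correspondences: each ToF point is matched to its closest model point.
    d3 = np.linalg.norm(cloud_3d[:, None, :] - model[None, :, :], axis=2)
    r3d = (cloud_3d - model[d3.argmin(axis=1)]).ravel()
    # 2D correspondences: image features are paired with projected model points.
    proj = project(model)
    d2 = np.linalg.norm(feats_2d[:, None, :] - proj[None, :, :], axis=2)
    r2d = (feats_2d - proj[d2.argmin(axis=1)]).ravel()
    return np.concatenate([r3d, w2d * r2d])

def icp_step(theta, cloud_3d, feats_2d, eps=1e-5):
    """One Gauss-Newton update of the joint angles from the fused residual vector."""
    r = residuals(theta, cloud_3d, feats_2d)
    J = np.zeros((r.size, theta.size))
    for i in range(theta.size):                  # numerical Jacobian, column by column
        dt = np.zeros_like(theta)
        dt[i] = eps
        J[:, i] = (residuals(theta + dt, cloud_3d, feats_2d) - r) / eps
    delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return theta + delta

if __name__ == "__main__":
    true_theta = np.array([0.3, -0.2])
    cloud = forward_model(true_theta) + 0.005 * np.random.randn(20, 3)   # synthetic ToF points
    feats = project(forward_model(true_theta))                           # synthetic 2D features
    theta = np.zeros(2)
    for _ in range(15):
        theta = icp_step(theta, cloud, feats)
    print("estimated joint angles:", theta)
```

In the paper's setting the model is a full hierarchical body of generalized cylinders with constrained joints, and the 2D cues are arbitrary image-space features rather than projected axis points; the sketch only shows how both residual types can be combined into a single least-squares pose update per iteration.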
Pages: 321-329
Page count: 9
Related Papers
50 records in total
  • [21] Matching 2D and 3D articulated shapes using the eccentricity transform
    Ion, Adrian
    Artner, Nicole M.
    Peyre, Gabriel
    Kropatsch, Walter G.
    Cohen, Laurent D.
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2011, 115 (06) : 817 - 834
  • [22] Particles coupled with Data Fusion for 3D Tracking
    Chen, Huiying
    Li, Youfu
    2008 IEEE INTERNATIONAL CONFERENCE ON INDUSTRIAL TECHNOLOGY, VOLS 1-5, 2008, : 749 - 754
  • [23] Implementation of 1D/2D/3D sensor data fusion in CASE_ATTI test-bed
    Benaskeur, AR
    Triki, Z
    SIGNAL PROCESSING, SENSOR FUSION, AND TARGET RECOGNITION XII, 2003, 5096 : 432 - 443
  • [24] Assessment of 3D left ventricular function and 3D speckle tracking echocardiography: comparison to 2D planimetry and 2D speckle tracking
    Brecht, A.
    Theres, L.
    Dreger, H.
    Spethmann, S.
    Baumann, G.
    Knebel, F.
    EUROPEAN HEART JOURNAL, 2014, 35 : 287 - 287
  • [25] Fusion of 2D and 3D data in three-dimensional face recognition
    Bronstein, AM
    Bronstein, MM
    Gordon, E
    Kimmel, R
    ICIP: 2004 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1- 5, 2004, : 87 - 90
  • [26] MRI - Mammography 2D/3D data fusion for breast pathology assessment
    Behrenbruch, CP
    Marias, K
    Armitage, PA
    Yam, M
    Moore, N
    English, RE
    Brady, JM
    MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION - MICCAI 2000, 2000, 1935 : 307 - 316
  • [27] Farm3D: Learning Articulated 3D Animals by Distilling 2D Diffusion
    Jakab, Tomas
    Li, Ruining
    Wu, Shangzhe
    Rupprecht, Christian
    Vedaldi, Andrea
    2024 INTERNATIONAL CONFERENCE IN 3D VISION, 3DV 2024, 2024, : 852 - 861
  • [28] An Automatic Multimodal Data Registration Strategy for 2D/3D Information Fusion
    Schierl, Jonathan
    Asari, Vijayan
    Singer, Nina
    Aspiras, Theus
    Stokes, Andrew
    Keaffaber, Brett
    Van Rynbach, Andre
    Decker, Kevin
    Rabb, David
    MULTIMODAL IMAGE EXPLOITATION AND LEARNING 2022, 2022, 12100
  • [29] Comparison of 2D and 3D displays and sensor fusion for threat detection, surveillance, and telepresence
    Meitzler, T
    Bednarz, D
    Lane, K
    Sohn, EJ
    Bryk, D
    Bankowski, E
    Jozwiak, R
    Andrews, R
    SENSORS, AND COMMAND, CONTROL, COMMUNICATIONS, AND INTELLIGENCE (C3I) TECHNOLOGIES FOR HOMELAND DEFENSE AND LAW ENFORCEMENT II, 2003, 5071 : 536 - 542
  • [30] 2D articulated tracking with dynamic Bayesian networks
    Shen, CH
    van den Hengel, A
    Dick, A
    Brooks, MJ
    FOURTH INTERNATIONAL CONFERENCE ON COMPUTER AND INFORMATION TECHNOLOGY, PROCEEDINGS, 2004, : 130 - 136