Pose Estimation of a Mobile Robot Based on Fusion of IMU Data and Vision Data Using an Extended Kalman Filter

Cited by: 126
Authors
Alatise, Mary B. [1]
Hancke, Gerhard P. [1,2]
Affiliations
[1] Univ Pretoria, Dept Elect Elect & Comp Engn, ZA-0028 Pretoria, South Africa
[2] City Univ Hong Kong, Dept Comp Sci, Hong Kong, Hong Kong, Peoples R China
Keywords
pose estimation; mobile robot; inertial sensors; vision; object; extended Kalman filter
Keywords Plus
augmented reality; inertial sensors; tracking; features; scale; calibration; consensus; standard; objects
DOI
10.3390/s17102164
CLC Number
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Using a single sensor to estimate the pose of a device cannot give accurate results. This paper presents the fusion of a six-degrees-of-freedom (6-DoF) inertial sensor, comprising a 3-axis accelerometer and a 3-axis gyroscope, with vision to determine a low-cost and accurate position for an autonomous mobile robot. For vision, a monocular object detection approach integrating the speeded-up robust features (SURF) and random sample consensus (RANSAC) algorithms was used to recognize a sample object in several captured images. Unlike conventional methods that depend on point tracking, RANSAC iteratively estimates the parameters of a mathematical model from a set of captured data that contains outliers. SURF and RANSAC improve accuracy because of their ability to find interest points (features) under different viewing conditions using the Hessian matrix. This approach is proposed because of its simple implementation, low cost, and improved accuracy. With an extended Kalman filter (EKF), data from the inertial sensors and the camera were fused to estimate the position and orientation of the mobile robot. All sensors were mounted on the mobile robot to obtain accurate localization. An indoor experiment was carried out to validate and evaluate the performance. Experimental results show that the proposed method is computationally fast, reliable, and robust, and can be considered for practical applications. The performance of the experiments was verified against ground-truth data and root mean square errors (RMSEs).
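As a concrete illustration of the vision pipeline described in the abstract, the following is a minimal Python sketch of SURF feature matching with RANSAC outlier rejection using OpenCV. It is not the authors' implementation; the file names and parameter values (Hessian threshold, ratio-test factor, reprojection threshold) are illustrative assumptions, and SURF requires an opencv-contrib build because the algorithm is patented.

    import cv2
    import numpy as np

    # Illustrative inputs: a reference image of the object and a scene image.
    object_img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
    scene_img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

    # SURF finds interest points where the determinant of the Hessian is large.
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(object_img, None)
    kp2, des2 = surf.detectAndCompute(scene_img, None)

    # Match descriptors and keep only distinctive matches (Lowe's ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC fits a homography while discarding outlier correspondences.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)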
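Similarly, the sensor-fusion step follows the standard EKF predict/update cycle. The sketch below is an assumption-laden simplification rather than the paper's filter: it uses a planar pose state (x, y, heading), propagates it with an IMU-derived speed and gyro rate, and corrects it with a camera-derived pose measurement.

    import numpy as np

    def ekf_predict(x, P, v, omega, dt, Q):
        # Unicycle motion model driven by IMU-derived speed v and gyro rate omega.
        theta = x[2]
        x_pred = x + np.array([v * dt * np.cos(theta),
                               v * dt * np.sin(theta),
                               omega * dt])
        # Jacobian of the motion model with respect to the state.
        F = np.array([[1.0, 0.0, -v * dt * np.sin(theta)],
                      [0.0, 1.0,  v * dt * np.cos(theta)],
                      [0.0, 0.0,  1.0]])
        return x_pred, F @ P @ F.T + Q

    def ekf_update(x, P, z, R):
        # The camera measurement is a full pose, so the measurement model is identity.
        H = np.eye(3)
        y = z - H @ x                                # innovation
        y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi  # wrap heading difference
        S = H @ P @ H.T + R                          # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
        return x + K @ y, (np.eye(3) - K @ H) @ P

The paper estimates full 3D position and orientation from 6-DoF inertial data; the 3-state version above is only meant to convey the structure of the fusion.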
Pages: 22