ATVIO: ATTENTION GUIDED VISUAL-INERTIAL ODOMETRY

Cited by: 11
Authors
Liu, Li [1 ]
Li, Ge [1 ]
Li, Thomas H. [2 ]
Affiliations
[1] Peking Univ, Sch Elect & Comp Engn, Shenzhen Grad Sch, Beijing, Peoples R China
[2] Peking Univ, Adv Inst Informat Technol, Beijing, Peoples R China
Source
2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021) | 2021
Keywords
Visual-Inertial Odometry; Attention; Feature Fusion; IMU; VERSATILE;
DOI
10.1109/ICASSP39728.2021.9413912
Chinese Library Classification (CLC)
O42 [Acoustics]
Subject Classification Codes
070206; 082403
Abstract
Visual-inertial odometry (VIO) aims to predict a trajectory through ego-motion estimation. In recent years, end-to-end VIO has made great progress. However, how to process visual and inertial measurements and make full use of the complementarity of cameras and inertial sensors remains a challenge. In this paper, we propose a novel attention-guided deep framework for visual-inertial odometry (ATVIO) to improve the performance of VIO. Specifically, we concentrate on the effective utilization of Inertial Measurement Unit (IMU) information and carefully design a one-dimensional inertial feature encoder that extracts inertial features quickly and effectively. Meanwhile, fusing inertial and visual features can introduce an inconsistency problem, so we explore a novel cross-domain channel attention block that combines the extracted features in a more adaptive manner. Extensive experiments demonstrate that our method achieves competitive performance against state-of-the-art VIO methods.
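The abstract names two building blocks: a one-dimensional convolutional encoder for raw IMU sequences and a cross-domain channel attention block that re-weights visual and inertial features before fusion. The following is a minimal PyTorch sketch of that idea, not the authors' released code; the layer sizes, the SE-style gating, and the assumed window of 100 IMU samples between frames are illustrative assumptions only.

# Minimal sketch of the two components described in the ATVIO abstract.
# All architectural details below are assumptions, not taken from the paper.
import torch
import torch.nn as nn


class InertialEncoder1D(nn.Module):
    """Encodes a window of IMU samples (B, 6, T): 3-axis accel + 3-axis gyro."""

    def __init__(self, out_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(6, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool1d(1),              # pool over the time axis
        )
        self.fc = nn.Linear(128, out_dim)

    def forward(self, imu: torch.Tensor) -> torch.Tensor:
        x = self.conv(imu).squeeze(-1)            # (B, 128)
        return self.fc(x)                         # (B, out_dim)


class CrossDomainChannelAttention(nn.Module):
    """Gates each modality's channels with a descriptor of the other modality,
    then concatenates the re-weighted features (an assumed SE-style design)."""

    def __init__(self, vis_dim: int = 512, imu_dim: int = 256, reduction: int = 8):
        super().__init__()
        self.vis_gate = nn.Sequential(
            nn.Linear(imu_dim, vis_dim // reduction), nn.ReLU(inplace=True),
            nn.Linear(vis_dim // reduction, vis_dim), nn.Sigmoid(),
        )
        self.imu_gate = nn.Sequential(
            nn.Linear(vis_dim, imu_dim // reduction), nn.ReLU(inplace=True),
            nn.Linear(imu_dim // reduction, imu_dim), nn.Sigmoid(),
        )

    def forward(self, vis: torch.Tensor, imu: torch.Tensor) -> torch.Tensor:
        vis_att = vis * self.vis_gate(imu)        # IMU decides which visual channels matter
        imu_att = imu * self.imu_gate(vis)        # and vice versa
        return torch.cat([vis_att, imu_att], dim=-1)


if __name__ == "__main__":
    imu_enc = InertialEncoder1D()
    fusion = CrossDomainChannelAttention()
    vis_feat = torch.randn(4, 512)                # e.g. from a CNN image encoder
    imu_feat = imu_enc(torch.randn(4, 6, 100))    # assumed 100 IMU samples per frame pair
    print(fusion(vis_feat, imu_feat).shape)       # torch.Size([4, 768])

The fused vector would then feed a pose regression head; the cross-modal gating is one plausible reading of "cross-domain channel attention", chosen here for brevity.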
Pages: 4125-4129
Page count: 5
Related Papers
50 records in total
  • [1] Robocentric Visual-Inertial Odometry
    Huai, Zheng
    Huang, Guoquan
    2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 6319 - 6326
  • [2] Robocentric visual-inertial odometry
    Huai, Zheng
    Huang, Guoquan
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2022, 41 (07): : 667 - 689
  • [3] Cooperative Visual-Inertial Odometry
    Zhu, Pengxiang
    Yang, Yulin
    Ren, Wei
    Huang, Guoquan
    2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 13135 - 13141
  • [4] Compass aided visual-inertial odometry
    Wang, Yandong
    Zhang, Tao
    Wang, Yuanchao
    Ma, Jingwei
    Li, Yanhui
    Han, Jingzhuang
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2019, 60 : 101 - 115
  • [5] Information Sparsification in Visual-Inertial Odometry
    Hsiung, Jerry
    Hsiao, Ming
    Westman, Eric
    Valencia, Rafael
    Kaess, Michael
    2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 1146 - 1153
  • [6] EMA-VIO: Deep Visual-Inertial Odometry With External Memory Attention
    Tu, Zheming
    Chen, Changhao
    Pan, Xianfei
    Liu, Ruochen
    Cui, Jiarui
    Mao, Jun
    IEEE SENSORS JOURNAL, 2022, 22 (21) : 20877 - 20885
  • [7] A Partial Sparsification Scheme for Visual-Inertial Odometry
    Zhu, Zhikai
    Wang, Wei
    2020 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM), 2020, : 1983 - 1989
  • [8] Monocular Visual-Inertial Odometry for Agricultural Environments
    Song, Kaiyu
    Li, Jingtao
    Qiu, Run
    Yang, Gaidi
IEEE ACCESS, 2022, 10: 103975 - 103986
  • [9] ADVIO: An Authentic Dataset for Visual-Inertial Odometry
    Cortes, Santiago
    Solin, Arno
    Rahtu, Esa
    Kannala, Juho
    COMPUTER VISION - ECCV 2018, PT X, 2018, 11214 : 425 - 440
  • [10] Unsupervised Monocular Visual-inertial Odometry Network
    Wei, Peng
    Hua, Guoliang
    Huang, Weibo
    Meng, Fanyang
    Liu, Hong
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 2347 - 2354