Convolution-enhanced vision transformer method for lower limb exoskeleton locomotion mode recognition

Cited: 0
Authors
Zheng, Jianbin [1 ]
Wang, Chaojie [1 ]
Huang, Liping [1 ]
Gao, Yifan [1 ]
Yan, Ruoxi [1 ]
Yang, Chunbo [1 ]
Gao, Yang [1 ]
Wang, Yu [1 ]
Affiliations
[1] Wuhan Univ Technol, Sch Informat Engn, Wuhan, Hubei, Peoples R China
Keywords
Conv-ViT (convolution-enhanced vision transformer); exoskeleton robot; locomotion mode recognition; locomotion transitions; INTENT RECOGNITION; CLASSIFICATION;
DOI
10.1111/exsy.13659
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Providing the human body with smooth and natural assistance through lower limb exoskeletons is crucial. However, a significant challenge is identifying various locomotion modes so that the exoskeleton can offer seamless support. In this study, we propose a locomotion mode recognition method named Convolution-enhanced Vision Transformer (Conv-ViT). This method combines the benefits of convolution for feature extraction and fusion with the Transformer's self-attention mechanism, which efficiently captures long-term dependencies among different positions within the input sequence. By equipping the exoskeleton with inertial measurement units, we collected motion data from 27 healthy subjects and used it to train the Conv-ViT model. To ensure the exoskeleton's stability and safety during transitions between locomotion modes, we examined not only the five typical steady modes (walking on level ground [WL], stair ascent [SA], stair descent [SD], ramp ascent [RA], and ramp descent [RD]) but also eight locomotion transitions (WL-SA, WL-SD, WL-RA, WL-RD, SA-WL, SD-WL, RA-WL, RD-WL). In recognizing the five steady locomotion modes and the eight transitions, accuracy reached 98.87% and 96.74%, respectively. Compared with three popular algorithms (ViT, convolutional neural networks, and support vector machines), the proposed method achieved the best recognition performance, with highly significant differences in accuracy and F1 score over the other methods. Finally, we also demonstrated the strong generalization performance of Conv-ViT.
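The self-attention mechanism the abstract credits with capturing long-term dependencies can be illustrated with a minimal sketch. This is not the authors' code; the window length (100 IMU samples) and embedding sizes are hypothetical, chosen only to show how each position in the sequence attends to every other position:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """Scaled dot-product self-attention.

    tokens: (seq_len, d_model) embedded IMU samples
    Wq, Wk, Wv: (d_model, d_head) projection matrices
    Returns the attended outputs and the attention weight matrix.
    """
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ v, weights

# Hypothetical shapes: a 100-sample IMU window, 16-dim token embeddings.
rng = np.random.default_rng(0)
seq_len, d_model, d_head = 100, 16, 8
tokens = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))
out, weights = self_attention(tokens, Wq, Wk, Wv)
print(out.shape)  # (100, 8): one attended vector per input position
```

In a convolution-enhanced variant such as the one the paper names, convolutional layers would typically produce the token embeddings (and fuse local features) before blocks of this attention operation are applied; the exact architecture is described in the paper itself.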
Pages: 22
Related Papers
54 records
  • [21] A Gait Phase Detection Method in Complex Environment Based on DTW-Mean Templates
    Huang, Liping
    Zheng, Jianbin
    Hu, Huacheng
    [J]. IEEE SENSORS JOURNAL, 2021, 21 (13) : 15114 - 15123
  • [22] Online Gait Phase Detection in Complex Environment Based on Distance and Multi-Sensors Information Fusion Using Inertial Measurement Units
    Huang, Liping
    Zheng, Jianbin
    Hu, Huacheng
    [J]. INTERNATIONAL JOURNAL OF SOCIAL ROBOTICS, 2022, 14 (02) : 413 - 428
  • [23] Real-Time Intended Knee Joint Motion Prediction by Deep-Recurrent Neural Networks
    Huang, Yongchuang
    He, Zexia
    Liu, Yuxuan
    Yang, Ruiyuan
    Zhang, Xiufeng
    Cheng, Guang
    Yi, Jingang
    Ferreira, Joao Paulo
    Liu, Tao
    [J]. IEEE SENSORS JOURNAL, 2019, 19 (23) : 11503 - 11509
  • [24] Lower Limb Wearable Robots for Assistance and Rehabilitation: A State of the Art
    Huo, Weiguang
    Mohammed, Samer
    Moreno, Juan C.
    Amirat, Yacine
    [J]. IEEE SYSTEMS JOURNAL, 2016, 10 (03): : 1068 - 1081
  • [25] Classification of Three Types of Walking Activities Regarding Stairs Using Plantar Pressure Sensors
    Jeong, Gu-Min
    Phuc Huu Truong
    Choi, Sang-Il
    [J]. IEEE SENSORS JOURNAL, 2017, 17 (09) : 2638 - 2639
  • [26] A Muscle Synergy-Inspired Method of Detecting Human Movement Intentions Based on Wearable Sensor Fusion
    Liu, Yi-Xing
    Wang, Ruoli
    Gutierrez-Farewik, Elena M.
    [J]. IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2021, 29 : 1089 - 1098
  • [27] Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
    Liu, Ze
    Lin, Yutong
    Cao, Yue
    Hu, Han
    Wei, Yixuan
    Zhang, Zheng
    Lin, Stephen
    Guo, Baining
    [J]. 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 9992 - 10002
  • [28] Intent Pattern Recognition of Lower-limb Motion Based on Mechanical Sensors
    Liu, Zuojun
    Lin, Wei
    Geng, Yanli
    Yang, Peng
    [J]. IEEE-CAA JOURNAL OF AUTOMATICA SINICA, 2017, 4 (04) : 651 - 660
  • [29] Wearable Sensor-Based Human Activity Recognition with Hybrid Deep Learning Model
    Luwe, Yee Jia
    Lee, Chin Poo
    Lim, Kian Ming
    [J]. INFORMATICS-BASEL, 2022, 9 (03):
  • [30] Mahajan, D. K., Girshick, R. B., 2018, arXiv