A Hierarchical-Based Learning Approach for Multi-Action Intent Recognition

Cited by: 0
|
Authors
Hollinger, David [1 ]
Pollard, Ryan S. [1 ]
Schall Jr, Mark C. [2 ]
Chen, Howard [3 ]
Zabala, Michael [1 ]
Affiliations
[1] Auburn Univ, Dept Mech Engn, Auburn, AL 36849 USA
[2] Auburn Univ, Dept Ind & Syst Engn, Auburn, AL 36849 USA
[3] Univ Alabama, Dept Ind & Syst Engn & Engn Management, Huntsville, AL 35899 USA
Keywords
wearable sensors; accelerometers; gyroscopes; movement intent prediction;
DOI
10.3390/s24237857
CLC number
O65 [Analytical Chemistry];
Discipline codes
070302; 081704;
Abstract
Recent applications of wearable inertial measurement units (IMUs) for predicting human movement have often entailed estimating action-level (e.g., walking, running, jumping) and joint-level (e.g., ankle plantarflexion angle) motion. Although action-level or joint-level information is frequently the focus of movement intent prediction, contextual information is necessary for a more thorough approach to intent recognition. Therefore, a combination of action-level and joint-level information may offer a more comprehensive approach to predicting movement intent. In this study, we devised a novel hierarchical-based method combining action-level classification and subsequent joint-level regression to predict joint angles 100 ms into the future. K-nearest neighbors (KNN), bidirectional long short-term memory (BiLSTM), and temporal convolutional network (TCN) models were employed for action-level classification, and a random forest model trained on action-specific IMU data was used for joint-level prediction. A joint-level action-generic model trained on multiple actions (e.g., backward walking, kneeling down, kneeling up, running, and walking) was also used for predicting the joint angle. Compared with a hierarchical-based approach, the action-generic model had lower prediction error for backward walking, kneeling down, and kneeling up. Although the TCN and BiLSTM classifiers achieved classification accuracies of 89.87% and 89.30%, respectively, they did not surpass the performance of the action-generic random forest model when used in combination with an action-specific random forest model. This may have been because the action-generic approach was trained on more data from multiple actions. This study demonstrates the advantage of leveraging large, disparate data sources over a hierarchical-based approach for joint-level prediction. Moreover, it demonstrates the efficacy of an IMU-driven, task-agnostic model in predicting future joint angles across multiple actions.
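The hierarchical pipeline described in the abstract, classify the action first, then route the IMU window to an action-specific regressor, can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' implementation: scikit-learn's KNN stands in for the KNN/BiLSTM/TCN classifiers, random forests stand in for the joint-level models, and all feature values, action labels, and the toy "future joint angle" target are invented for the example.

```python
# Hedged sketch of a hierarchical intent-recognition pipeline:
# Stage 1 classifies the action from an IMU feature window; Stage 2
# predicts a future joint angle with either an action-specific
# regressor (hierarchical branch) or one action-generic regressor
# trained on all actions pooled. All data below is synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
ACTIONS = ["walking", "running", "kneeling_down"]  # illustrative subset

# Synthetic IMU feature windows: 60 windows x 12 features per action,
# plus a toy joint-angle target standing in for the angle 100 ms ahead.
X_parts, y_action, y_angle_parts = [], [], []
for k, action in enumerate(ACTIONS):
    feats = rng.normal(loc=k, scale=0.5, size=(60, 12))
    X_parts.append(feats)
    y_action += [action] * 60
    y_angle_parts.append(feats[:, 0] * 5.0 + 10.0 * k)  # toy target
X = np.vstack(X_parts)
y_angle = np.concatenate(y_angle_parts)
y_action = np.array(y_action)

# Stage 1: action-level classifier (KNN as a stand-in).
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y_action)

# Stage 2a: one action-specific random forest per class.
specific = {
    a: RandomForestRegressor(n_estimators=50, random_state=0).fit(
        X[y_action == a], y_angle[y_action == a])
    for a in ACTIONS
}

# Stage 2b: action-generic random forest trained on all actions pooled.
generic = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y_angle)

def predict_hierarchical(x):
    """Classify the window, then use that action's regressor."""
    action = clf.predict(x.reshape(1, -1))[0]
    return action, specific[action].predict(x.reshape(1, -1))[0]

x_new = rng.normal(loc=1.0, scale=0.5, size=12)  # resembles "running"
action, ang_hier = predict_hierarchical(x_new)
ang_gen = generic.predict(x_new.reshape(1, -1))[0]
print(action, round(float(ang_hier), 2), round(float(ang_gen), 2))
```

The study's comparison amounts to evaluating `ang_hier` against `ang_gen` per action; the abstract reports that the pooled (generic) model won for several actions, plausibly because it sees more training data.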
Pages: 19
Related Papers
50 records in total
  • [31] Learning Hierarchical Context for Action Recognition in Still Images
    Zhu, Haisheng
    Hu, Jian-Fang
    Zheng, Wei-Shi
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING, PT III, 2018, 11166 : 67 - 77
  • [32] STREAMING INFLUENCE MAXIMIZATION IN SOCIAL NETWORKS BASED ON MULTI-ACTION CREDIT DISTRIBUTION
    Yu, Qilian
    Li, Hang
    Liao, Yun
    Cui, Shuguang
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 6378 - 6382
  • [33] Multi-Action Planning for Threat Management: A Novel Approach for the Spatial Prioritization of Conservation Actions
    Cattarino, Lorenzo
    Hermoso, Virgilio
    Carwardine, Josie
    Kennard, Mark J.
    Linke, Simon
PLOS ONE, 2015, 10 (05)
  • [34] A compact discriminant hierarchical clustering approach for action recognition
    Tong, Ming
    Tian, Weijuan
    Wang, Houyi
    Wang, Fan
    MULTIMEDIA TOOLS AND APPLICATIONS, 2018, 77 (06) : 7539 - 7564
  • [36] Robust Action Recognition Based on a Hierarchical Model
    Jiang, Xinbo
    Zhong, Fan
    Peng, Qunsheng
    Qin, Xueying
    2013 INTERNATIONAL CONFERENCE ON CYBERWORLDS (CW), 2013, : 191 - 198
  • [37] Hierarchical Consistent Contrastive Learning for Skeleton-Based Action Recognition with Growing Augmentations
    Zhang, Jiahang
    Lin, Lilang
    Liu, Jiaying
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 3, 2023, : 3427 - 3435
  • [38] Multi-GAT: A Graphical Attention-Based Hierarchical Multimodal Representation Learning Approach for Human Activity Recognition
    Islam, Md Mofijul
    Iqbal, Tariq
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2021, 6 (02) : 1729 - 1736
  • [39] A Prompt Learning Based Intent Recognition Method on a Chinese Implicit Intent Dataset CIID
    Liu, Shuhua
    Li, Lanting
    Fang, Ming
    Hung, Chih-Cheng
    Yang, Shihao
    NEURAL PROCESSING LETTERS, 2023, 55 (08) : 11017 - 11034