SMTDKD: A Semantic-Aware Multimodal Transformer Fusion Decoupled Knowledge Distillation Method for Action Recognition

Times Cited: 0
Authors
Quan, Zhenzhen [1 ]
Chen, Qingshan [1 ]
Wang, Wei [1 ]
Zhang, Moyan [1 ]
Li, Xiang [1 ]
Li, Yujun [1 ]
Liu, Zhi [1 ]
Affiliations
[1] Shandong Univ, Sch Informat Sci & Engn, Qingdao 266237, Shandong, Peoples R China
Keywords
Transformers; Sensors; Feature extraction; Wearable sensors; Visualization; Semantics; Knowledge engineering; Decoupled knowledge distillation; human action recognition (HAR); multimodal; transformer; wearable sensor; VISION
DOI
10.1109/JSEN.2023.3337367
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Multimodal sensors, including vision sensors and wearable sensors, offer valuable complementary information for accurate recognition tasks. Nonetheless, the heterogeneity among sensor data from different modalities presents a formidable challenge in extracting robust multimodal information amidst noise. In this article, we propose an innovative approach, named the semantic-aware multimodal transformer fusion decoupled knowledge distillation (SMTDKD) method, which guides video data recognition not only through the information interaction between different wearable-sensor data, but also through the information interaction between visual-sensor data and wearable-sensor data, improving the robustness of the model. To preserve the temporal relationship within wearable-sensor data, the SMTDKD method converts them into 2-D image data. Furthermore, a transformer-based multimodal fusion module is designed to capture diverse feature information from distinct wearable-sensor modalities. To mitigate modality discrepancies and encourage similar semantic features, graph cross-view attention maps are constructed across various convolutional layers to facilitate feature alignment. Additionally, semantic information is exchanged among the teacher network, the student network, and bidirectional encoder representations from transformers (BERT)-encoded labels. To obtain more comprehensive knowledge transfer, the decoupled knowledge distillation loss is utilized, thus enhancing the generalization of the network. Experimental evaluations conducted on three multimodal datasets, namely, UTD-MHAD, Berkeley-MHAD, and MMAct, demonstrate the superior performance of the proposed SMTDKD method over state-of-the-art human action recognition methods.
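The abstract names the decoupled knowledge distillation (DKD) loss as the knowledge-transfer objective. As a point of reference, the sketch below shows the standard decomposition of DKD into target-class and non-target-class terms in PyTorch; the function name dkd_loss and the hyperparameter values (alpha, beta, temperature T) are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of a decoupled knowledge distillation (DKD) loss, assuming the
# standard TCKD/NCKD split; alpha, beta, and T are placeholder hyperparameters.
import torch
import torch.nn.functional as F


def dkd_loss(student_logits, teacher_logits, target, alpha=1.0, beta=8.0, T=4.0):
    """DKD = alpha * target-class KD (TCKD) + beta * non-target-class KD (NCKD)."""
    gt_mask = F.one_hot(target, student_logits.size(-1)).bool()

    p_s = F.softmax(student_logits / T, dim=-1)
    p_t = F.softmax(teacher_logits / T, dim=-1)

    # TCKD: KL divergence between binary distributions (target vs. all others).
    p_s_bin = torch.stack([(p_s * gt_mask).sum(-1), (p_s * ~gt_mask).sum(-1)], dim=-1)
    p_t_bin = torch.stack([(p_t * gt_mask).sum(-1), (p_t * ~gt_mask).sum(-1)], dim=-1)
    tckd = F.kl_div(p_s_bin.log(), p_t_bin, reduction="batchmean") * (T ** 2)

    # NCKD: KL divergence over non-target classes only, obtained by masking the
    # target logit with a large negative value before the softmax.
    log_p_s_nt = F.log_softmax(student_logits / T - 1000.0 * gt_mask, dim=-1)
    p_t_nt = F.softmax(teacher_logits / T - 1000.0 * gt_mask, dim=-1)
    nckd = F.kl_div(log_p_s_nt, p_t_nt, reduction="batchmean") * (T ** 2)

    return alpha * tckd + beta * nckd
```

In a multimodal setup of the kind the abstract describes, such a loss would typically be applied between the student's logits and the (fused) teacher's logits, alongside a standard cross-entropy term on the ground-truth labels; how the paper combines these terms is not specified here.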
Pages: 2289-2304
Number of pages: 16