Transformer-based deep reverse attention network for multi-sensory human activity recognition

Cited by: 12
Authors
Pramanik, Rishav [1]
Sikdar, Ritodeep [1]
Sarkar, Ram [1]
Affiliations
[1] Jadavpur Univ, Dept Comp Sci & Engn, Kolkata 700032, West Bengal, India
Keywords
Deep learning; Reverse attention; Human activity recognition; Time-series prediction; Sensor data; Ensemble
DOI
10.1016/j.engappai.2023.106150
Chinese Library Classification
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
Human Activity Recognition (HAR) is one of the important applications of Artificial Intelligence (AI) today, with uses ranging from health monitoring of patients with chronic diseases to gesture recognition on gaming consoles. Sensor-based HAR systems label an activity from signals collected over a period of time, so an efficient HAR model must learn an optimal association of spatial and temporal features. In this article, we propose a sensor-based HAR technique using a deep learning approach. We present a deep reverse transformer-based attention mechanism to guide the side residual features. Unlike conventional bottom-up approaches to feature fusion, we exploit a top-down feature fusion approach. The reverse attention is self-calibrated throughout the course of learning, which regularizes the attention modules and dynamically adjusts the learning rate. The overall framework outperforms several state-of-the-art methods, with statistically significant improvements, on five publicly available sensor-based HAR datasets, namely MHEALTH, USC-HAD, WHARF, UTD-MHAD1, and UTD-MHAD2. Further, we conduct an ablation study to showcase the importance of each component of the proposed framework. The source code of this work is available at https://github.com/rishavpramanik/RevTransformerAttentionHAR.
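The abstract's key mechanism, reverse attention produced by a transformer and used to re-weight side residual features in a top-down fusion path, can be illustrated with a minimal PyTorch sketch. Everything below (the block structure, the layer sizes, and the fusion rule side * (1 - attn) + top_down) is an illustrative assumption, not the authors' exact implementation, which is available in the linked repository.

import torch
import torch.nn as nn

class ReverseAttentionBlock(nn.Module):
    # Illustrative sketch only: a transformer encoder layer scores the
    # top-down features, and the complement of that score (the reverse
    # attention) re-weights the side residual features. The authors'
    # actual design, including the self-calibration that regularizes the
    # attention modules, is in the official repository.
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, top_down, side):
        # top_down, side: (batch, time_steps, dim) sensor feature maps.
        attn = torch.sigmoid(self.proj(self.encoder(top_down)))
        # Reverse attention: emphasize the parts of the side residual
        # features that the top-down path attends to weakly.
        return side * (1.0 - attn) + top_down

if __name__ == "__main__":
    block = ReverseAttentionBlock(dim=64)
    x_top = torch.randn(2, 100, 64)    # hypothetical deeper-stage features
    x_side = torch.randn(2, 100, 64)   # hypothetical side residual features
    print(block(x_top, x_side).shape)  # torch.Size([2, 100, 64])

Weighting the side features by (1 - attn) is the defining trait of reverse attention: the fused output recovers information that the top-down path would otherwise suppress.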
Pages: 12
Related papers
50 records in total (showing [41]-[50])
[41] Human Activity Recognition Method Based on FMCW Radar Sensor with Multi-Domain Feature Attention Fusion Network [J]. Cao, Lin; Liang, Song; Zhao, Zongmin; Wang, Dongfeng; Fu, Chong; Du, Kangning. SENSORS, 2023, 23 (11).
[42] MMTSA: Multi-Modal Temporal Segment Attention Network for Efficient Human Activity Recognition [J]. Gao, Ziqi; Wang, Yuntao; Chen, Jianguo; Xing, Junliang; Patel, Shwetak; Liu, Xin; Shi, Yuanchun. PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES-IMWUT, 2023, 7 (03).
[43] Multihead-Res-SE Residual Network with Attention for Human Activity Recognition [J]. Kang, Hongbo; Lv, Tailong; Yang, Chunjie; Wang, Wenqing. ELECTRONICS, 2024, 13 (17).
[44] A transformer-based deep neural network for arrhythmia detection using continuous ECG signals [J]. Hu, Rui; Chen, Jie; Zhou, Li. COMPUTERS IN BIOLOGY AND MEDICINE, 2022, 144.
[45] Transformative Noise Reduction: Leveraging a Transformer-Based Deep Network for Medical Image Denoising [J]. Naqvi, Rizwan Ali; Haider, Amir; Kim, Hak Seob; Jeong, Daesik; Lee, Seung-Won. MATHEMATICS, 2024, 12 (15).
[46] Cervical lesion segmentation via transformer-based network with attention and boundary-aware modules [J]. Gao, Huayu; Li, Jing; Shen, Nanyan; Lu, Wei; Ma, Juanjuan; Yang, Ying. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2025, 109.
[47] TBiSeg: A transformer-based network with bi-level routing attention for inland waterway segmentation [J]. Fu, Chuanmao; Li, Meng; Zhang, Bo; Wang, Hongbo. OCEAN ENGINEERING, 2024, 311.
[48] STMultiple: Sparse Transformer Based on RFID for Multi-Object Activity Recognition [J]. Shen, Shunwen; Yang, Mulan; Hou, Xuehan; Yang, Lvqing; Chen, Sien; Dong, Wensheng; Yu, Bo; Wang, Qingkai. INTERNATIONAL JOURNAL OF SOFTWARE ENGINEERING AND KNOWLEDGE ENGINEERING, 2023, 33 (11-12): 1813-1833.
[49] A human activity recognition method based on Vision Transformer [J]. Han, Huiyan; Zeng, Hongwei; Kuang, Liqun; Han, Xie; Xue, Hongxin. SCIENTIFIC REPORTS, 2024, 14 (01).
[50] Human Activity Recognition based on Transformer in Smart Home [J]. Huang, Xinmei; Zhang, Sheng. 2023 2ND ASIA CONFERENCE ON ALGORITHMS, COMPUTING AND MACHINE LEARNING, CACML 2023, 2023: 520-525.