DanHAR: Dual Attention Network for multimodal human activity recognition using wearable sensors

Cited by: 121
Authors
Gao, Wenbin [1 ]
Zhang, Lei [1 ]
Teng, Qi [1 ]
He, Jun [2 ]
Wu, Hao [3 ]
Affiliations
[1] Nanjing Normal Univ, Sch Elect & Automat Engn, Nanjing 210023, Peoples R China
[2] Nanjing Univ Informat Sci & Technol, Sch Artificial Intelligence, Nanjing 210044, Peoples R China
[3] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650091, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Human activity recognition; Multimodal sensors; Convolutional neural networks; Residual network; Channel attention; Neural networks
DOI
10.1016/j.asoc.2021.107728
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
In this paper, we present a new dual attention method called DanHAR, which blends channel and temporal attention on residual networks to improve feature representation ability for sensor-based HAR tasks. Specifically, the channel attention plays a key role in deciding what to focus on, i.e., which sensor modalities matter, while the temporal attention picks out the target activity from a long sensor sequence, telling the network where to focus. Extensive experiments are conducted on four public HAR datasets as well as a weakly labeled HAR dataset. The results show that the dual attention mechanism is of central importance for many activity recognition tasks. We obtain relative improvements of 2.02%, 4.20%, 1.95%, 5.22% and 5.00% over regular ConvNets on the WISDM, UNIMIB SHAR, PAMAP2, and OPPORTUNITY datasets and the weakly labeled HAR dataset, respectively. DanHAR surpasses other state-of-the-art algorithms at negligible computational overhead. Visualization analysis shows that the proposed attention captures the spatial-temporal dependencies of multimodal sensing data, amplifying the more important sensor modalities and timesteps during classification. The results agree well with human intuition. (C) 2021 Elsevier B.V. All rights reserved.
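To make the "what to focus on" / "where to focus" split concrete, the following is a minimal, hypothetical PyTorch sketch of channel attention (reweighting sensor-modality channels) followed by temporal attention (reweighting timesteps), wrapped in a residual block. It illustrates the general idea only and is not the authors' released implementation; the module names (ChannelAttention, TemporalAttention, DualAttentionBlock) and hyperparameters (reduction=4, kernel_size=7) are assumptions.

# Hypothetical sketch of a dual-attention residual block for sensor windows
# of shape (batch, channels, timesteps); not the authors' code.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Scores sensor-modality channels: "what to focus on"."""
    def __init__(self, channels: int, reduction: int = 4):  # reduction is assumed
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                       # x: (B, C, T)
        avg = self.mlp(x.mean(dim=2))           # average-pooled channel descriptor
        mx = self.mlp(x.amax(dim=2))            # max-pooled channel descriptor
        scale = torch.sigmoid(avg + mx)         # (B, C) channel weights
        return x * scale.unsqueeze(2)           # reweight each channel

class TemporalAttention(nn.Module):
    """Scores timesteps within the window: "where to focus"."""
    def __init__(self, kernel_size: int = 7):   # kernel size is assumed
        super().__init__()
        self.conv = nn.Conv1d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (B, C, T)
        desc = torch.cat([x.mean(dim=1, keepdim=True),
                          x.amax(dim=1, keepdim=True)], dim=1)  # (B, 2, T)
        scale = torch.sigmoid(self.conv(desc))                  # (B, 1, T)
        return x * scale                         # reweight each timestep

class DualAttentionBlock(nn.Module):
    """Residual block applying channel then temporal attention."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, 3, padding=1),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
        )
        self.ca = ChannelAttention(channels)
        self.ta = TemporalAttention()

    def forward(self, x):
        out = self.ta(self.ca(self.conv(x)))
        return out + x                           # residual connection

if __name__ == "__main__":
    window = torch.randn(8, 64, 128)             # (batch, channels, timesteps)
    print(DualAttentionBlock(64)(window).shape)  # torch.Size([8, 64, 128])

Because both attention maps are plain multiplicative masks, they can be read out at inference time and visualized over sensor channels and timesteps, which is the kind of inspection the abstract's visualization analysis refers to.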
Pages: 12