TAHAR: A Transferable Attention-Based Adversarial Network for Human Activity Recognition with RFID

Cited by: 2
Authors
Chen, Dinghao [1 ]
Yang, Lvqing [1 ]
Cao, Hua [2 ]
Wang, Qingkai [3 ]
Dong, Wensheng [4 ]
Yu, Bo [4 ]
Affiliations
[1] Xiamen Univ, Sch Informat, Xiamen 361005, Peoples R China
[2] Fujian Med Univ, Union Hosp, Dept Cardiovasc Surg, Fuzhou 350004, Peoples R China
[3] Beijing Key Lab Proc Automat Min & Met, Beijing 100160, Peoples R China
[4] Zijin Zhixin Xiamen Technol Co Ltd, Xiamen 361000, Peoples R China
Source
ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, ICIC 2023, PT II | 2023, Vol. 14087
Funding
National Key Research and Development Program of China;
Keywords
RFID; human activity recognition; attention network;
DOI
10.1007/978-981-99-4742-3_20
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Human activity recognition based on radio frequency identification (RFID) has become an essential part of the Internet of Things. At present, most RFID-based human activity recognition research focuses on domain-specific recognition. When addressing cross-domain problems, existing solutions usually match global features for domain adaptation and overlook untransferable features, which leads to degraded recognition accuracy. This paper proposes a novel human activity recognition model, TAHAR, which adopts transferable attention and adversarial learning to eliminate the negative influence of untransferable and domain-specific features. In TAHAR, a feature extractor captures spatio-temporal information from the phase and Received Signal Strength Indicator (RSSI) of the RFID signal. An attention module then weights the extracted features to minimize the influence of untransferable features and negative transfer. Additionally, domain discriminators and batch spectral penalization are used to remove domain-specific information, thereby enhancing both transferability and discriminability. Results show that TAHAR achieves an accuracy of 90.17% and a recall of 85.46%, outperforming several state-of-the-art methods.
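To make the pipeline described in the abstract concrete, the following PyTorch sketch illustrates one plausible arrangement of the named components: a spatio-temporal extractor over phase/RSSI streams, an attention module that re-weights features, an adversarial domain discriminator behind a gradient-reversal layer, and a batch spectral penalty on the top singular values of the feature batch. This is an illustrative assumption only, not the authors' implementation; all class names (TAHARSketch, GradReverse, batch_spectral_penalty), layer choices, and dimensions are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer commonly used for adversarial domain adaptation."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient flowing back into the feature extractor.
        return -ctx.lam * grad_output, None

def batch_spectral_penalty(features, k=1):
    """Penalize the top-k singular values of the feature batch (batch spectral penalization)."""
    # features: (batch, feature_dim); singular values come back in descending order.
    _, s, _ = torch.linalg.svd(features, full_matrices=False)
    return torch.sum(s[:k] ** 2)

class TAHARSketch(nn.Module):
    """Hypothetical sketch of the pipeline described in the abstract (not the paper's code)."""
    def __init__(self, in_channels=2, feat_dim=128, num_classes=6, num_domains=2):
        super().__init__()
        # Stand-in spatio-temporal extractor over the phase and RSSI channels.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, feat_dim, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.rnn = nn.GRU(feat_dim, feat_dim, batch_first=True)
        # Attention scores over time steps; the weighted sum down-weights less transferable parts.
        self.attn = nn.Linear(feat_dim, 1)
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.discriminator = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, num_domains)
        )

    def forward(self, x, lam=1.0):
        # x: (batch, 2 channels [phase, RSSI], time)
        h = self.cnn(x).transpose(1, 2)           # (batch, time, feat_dim)
        h, _ = self.rnn(h)
        w = torch.softmax(self.attn(h), dim=1)    # attention weights over time
        feat = (w * h).sum(dim=1)                 # (batch, feat_dim)
        logits = self.classifier(feat)            # activity prediction
        dom_logits = self.discriminator(GradReverse.apply(feat, lam))  # domain prediction
        bsp = batch_spectral_penalty(feat)        # spectral penalty term
        return logits, dom_logits, bsp
```

In a training loop built on this sketch, the total loss would combine the activity classification loss, the adversarial domain classification loss on dom_logits, and a small weight on the spectral penalty; the relative weights are tuning choices not specified in this record.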
Pages: 247-259
Number of pages: 13