CSI-Based Location-Independent Human Activity Recognition by Contrast Between Dual Stream Fusion Features

Cited: 0
Authors
Wang, Yujie [1 ]
Yu, Guangwei [2 ]
Zhang, Yong [2 ]
Liu, Dun [2 ]
Zhang, Yang [3 ]
Affiliations
[1] Univ Sci & Technol Beijing, Sch Comp & Commun Engn, Beijing 100083, Peoples R China
[2] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei 230001, Peoples R China
[3] Univ Manchester, Sch Comp Sci, Manchester M13 9PL, England
Funding
National Natural Science Foundation of China;
Keywords
Contrastive learning; channel state information (CSI); feature fusion; recognition;
DOI
10.1109/JSEN.2024.3504005
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Because channel state information (CSI) data contain both activity and environmental information, the features of the same activity vary significantly across locations. Existing CSI-based human activity recognition (HAR) systems achieve high recognition accuracy at the training locations, using mechanisms such as transfer learning and few-shot learning to learn new activities, but they struggle to maintain accurate recognition at other locations. In this article, we propose a contrastive fusion feature-based location-independent HAR (CFLH) system to address this issue. Unlike existing methods that train the feature extractor and the fully connected classifier jointly, the CFLH system decouples the training of the feature extractor from that of the classifier: the feature extractor is optimized solely by a contrastive loss computed at the feature level. To construct positive samples, the CFLH system randomly scales activity signals in the temporal dimension, enriching intra- and interclass features across locations. Using labels, samples from different activity categories are treated as negative samples to widen interclass feature differences. For more effective activity feature extraction, the CFLH system employs a two-tower transformer to extract temporal-stream and channel-stream features, which are then fused into a dual-stream fusion feature by an attention- and residual-based fusion module (AR-Fusion). Experimental results show that when the feature extractor is trained with samples of three activities from 12 points and the classifier is trained with samples of three new activities at the training points, the highest recognition accuracy at the testing location reaches 94.48% for the three new activities and 95.71% for the three old ones.
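The two mechanisms the abstract describes, positive-sample construction by random temporal scaling and a label-driven contrastive loss in which different activity categories serve as negatives, can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the function names, the scale range (0.8-1.2), the crop/pad strategy, and the SupCon-style loss form are all assumptions.

```python
import numpy as np

def random_temporal_scale(x, scale_range=(0.8, 1.2), rng=None):
    """Illustrative positive-sample construction: stretch or compress a
    CSI window (T, C) along the time axis, then crop/pad back to length T.
    Scale range and crop/pad choices are assumptions, not from the paper."""
    rng = rng or np.random.default_rng()
    t, c = x.shape
    s = rng.uniform(*scale_range)
    new_t = max(2, int(round(t * s)))
    src = np.arange(t)
    dst = np.linspace(0, t - 1, new_t)
    # linear resampling per CSI channel (a speed change of the activity)
    y = np.stack([np.interp(dst, src, x[:, j]) for j in range(c)], axis=1)
    if new_t >= t:                         # stretched: random crop to T
        start = rng.integers(0, new_t - t + 1)
        return y[start:start + t]
    return np.pad(y, ((0, t - new_t), (0, 0)), mode="edge")  # compressed: pad

def supervised_contrastive_loss(z, labels, tau=0.1):
    """SupCon-style feature-level loss: same-label embeddings act as
    positives, different-label embeddings as negatives."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    n = len(labels)
    off_diag = ~np.eye(n, dtype=bool)
    sim = sim - sim.max(axis=1, keepdims=True)        # numerical stability
    denom = (np.exp(sim) * off_diag).sum(axis=1, keepdims=True)
    log_prob = sim - np.log(denom)                    # log-softmax over others
    pos = (labels[:, None] == labels[None, :]) & off_diag
    per_anchor = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

Under this decoupled scheme, only `supervised_contrastive_loss` would drive the feature extractor; a separate classifier would later be fit on the frozen features.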
Pages: 4897-4907
Number of Pages: 11