SMART: Emerging Activity Recognition with Limited Data for Multi-modal Wearable Sensing

Cited by: 1
Authors
Ghorpadel, Madhuri [1 ]
Chen, Haiquan [1 ]
Liu, Yuhong [2 ]
Jiang, Zhe [3 ]
Affiliations
[1] Calif State Univ Sacramento, Sacramento, CA 95819 USA
[2] Santa Clara Univ, Santa Clara, CA 95053 USA
[3] Univ Alabama, Tuscaloosa, AL USA
Source
2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA) | 2020
Keywords
Activity Recognition; Semi-supervised Learning; Deep Learning
DOI
10.1109/BigData50022.2020.9377746
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Activity recognition using ubiquitous wearable devices (e.g., smartphones, smartwatches, and sport bracelets) can be applied to many domains such as healthcare, smart environments, assisted living, human-computer interaction, and surveillance. Most existing activity recognition approaches require users to provide a sufficient amount of annotations (labels) for each activity in order to achieve acceptable performance, and therefore often fail to scale to a large number of activities or to recognize new (emerging) activities. To tackle this limitation, systems that require only limited training data are highly desirable. However, existing activity recognition solutions for limited training data rely heavily on low-level activity or attribute extraction and therefore suffer from two major limitations: (1) they fail to work well when activities are highly similar to each other, such as jogging, running, and jumping front and back, and (2) they degrade overall system performance on recognizing existing activities that have sufficient training data. In this paper, we introduce SMART, a unified semi-supervised framework for recognizing highly similar emerging activities without sacrificing performance on recognizing existing activities. Extensive experiments on real-world data showed that, compared to the state of the art, SMART yielded superior performance on recognizing emerging activities, especially highly similar emerging activities, while providing comparable performance on recognizing existing activities.
Pages: 1316-1321
Page count: 6
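
As a rough illustration of the semi-supervised, limited-label setting the abstract describes (this is not the SMART framework itself; the synthetic data, the choice of classifier, and the self-training strategy below are all assumptions made for illustration), one common baseline is to pseudo-label unlabeled sensor windows with scikit-learn's SelfTrainingClassifier:

```python
# Hypothetical sketch only: generic self-training (pseudo-labeling) for
# activity recognition with scarce labels. NOT the paper's SMART method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "wearable" feature windows: 3 activities, 64 features each
# (stand-ins for, e.g., summary statistics of accelerometer windows).
n_per_class, n_feats = 300, 64
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_feats))
               for c in range(3)])
y = np.repeat(np.arange(3), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Simulate limited annotations: keep labels for only ~5% of training
# windows; scikit-learn marks unlabeled samples with -1.
y_semi = y_train.copy()
unlabeled = rng.random(len(y_semi)) > 0.05
y_semi[unlabeled] = -1

# Self-training: the base classifier iteratively pseudo-labels the
# high-confidence unlabeled windows and retrains on them.
model = SelfTrainingClassifier(
    RandomForestClassifier(n_estimators=100, random_state=0),
    threshold=0.9)
model.fit(X_train, y_semi)

print(f"labeled fraction: {np.mean(~unlabeled):.2%}")
print(f"test accuracy:   {model.score(X_test, y_test):.3f}")
```

In this toy setup the classifier sees true labels for only about 5% of the training windows and pseudo-labels the rest above a 0.9 confidence threshold; SMART's actual architecture and training procedure for distinguishing highly similar emerging activities are described in the paper itself.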