SMART: Emerging Activity Recognition with Limited Data for Multi-modal Wearable Sensing

Cited by: 1
|
Authors
Ghorpadel, Madhuri [1 ]
Chen, Haiquan [1 ]
Liu, Yuhong [2 ]
Jiang, Zhe [3 ]
Affiliations
[1] Calif State Univ Sacramento, Sacramento, CA 95819 USA
[2] Santa Clara Univ, Santa Clara, CA 95053 USA
[3] Univ Alabama, Tuscaloosa, AL USA
Source
2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA) | 2020
Keywords
Activity Recognition; Semi-supervised Learning; Deep Learning;
DOI
10.1109/BigData50022.2020.9377746
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Activity recognition using ubiquitous wearable devices (e.g., smartphones, smartwatches, and sport bracelets) can be applied to many domains such as healthcare, smart environments, assisted living, human-computer interaction, and surveillance. Most existing activity recognition approaches require users to provide a sufficient amount of annotations (labels) for each activity in order to achieve acceptable performance, and therefore often fail to scale to a large number of activities, particularly new (emerging) ones. To tackle this limitation, systems that require only limited training data are highly desirable. However, existing activity recognition solutions for limited training data rely heavily on low-level activity or attribute extraction and therefore suffer from two major limitations: (1) failing to work well when activities are highly similar to each other, such as jogging, running, and jumping front and back, and (2) degrading overall system performance on recognizing existing activities with sufficient training data. In this paper, we introduce SMART, a unified semi-supervised framework for recognizing highly similar emerging activities without sacrificing the performance on recognizing existing activities. Extensive experiments on real-world data showed that compared to the state of the art, SMART yielded superior performance on recognizing emerging activities, especially highly similar emerging activities, while providing comparable performance on recognizing existing activities.
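This record gives no implementation details of SMART itself. Purely as a hedged illustration of the semi-supervised idea the abstract describes, a generic self-training (pseudo-labeling) loop over synthetic sensor-window features might look like the sketch below; all names, feature choices, and the 0.95 confidence threshold are assumptions, not the paper's method:

```python
# Generic self-training sketch, NOT the SMART framework from the paper.
# Feature vectors are a synthetic stand-in for statistics extracted from
# wearable sensor windows (e.g., per-axis accelerometer means).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_windows(n, mean):
    """Draw n synthetic 3-D feature vectors around a class-specific mean."""
    return rng.normal(mean, 0.5, size=(n, 3))

# Small labeled pool for two "existing" activities, plus a larger unlabeled pool.
X_lab = np.vstack([make_windows(20, 0.0), make_windows(20, 2.0)])
y_lab = np.array([0] * 20 + [1] * 20)
X_unlab = np.vstack([make_windows(100, 0.0), make_windows(100, 2.0)])

clf = LogisticRegression()
for _ in range(3):  # a few self-training rounds
    clf.fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unlab)
    confident = proba.max(axis=1) > 0.95  # confidence threshold (assumed)
    if not confident.any():
        break
    # Promote confidently pseudo-labeled windows into the labeled pool.
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unlab = X_unlab[~confident]

# Evaluate on fresh held-out windows from both activities.
X_test = np.vstack([make_windows(50, 0.0), make_windows(50, 2.0)])
y_test = np.array([0] * 50 + [1] * 50)
print(clf.score(X_test, y_test))
```

The sketch only shows the pseudo-labeling mechanism; it does not address the paper's two stated challenges (highly similar activities and preserving performance on existing ones), which are what SMART's unified framework targets.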
Pages: 1316 - 1321
Page count: 6