SMART: Emerging Activity Recognition with Limited Data for Multi-modal Wearable Sensing

Cited by: 1
Authors
Ghorpadel, Madhuri [1 ]
Chen, Haiquan [1 ]
Liu, Yuhong [2 ]
Jiang, Zhe [3 ]
Affiliations
[1] Calif State Univ Sacramento, Sacramento, CA 95819 USA
[2] Santa Clara Univ, Santa Clara, CA 95053 USA
[3] Univ Alabama, Tuscaloosa, AL USA
Source
2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA) | 2020
Keywords
Activity Recognition; Semi-supervised Learning; Deep Learning;
DOI
10.1109/BigData50022.2020.9377746
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Activity recognition using ubiquitous wearable devices (e.g., smartphones, smartwatches, and sport bracelets) can be applied to many application domains, such as healthcare, smart environments, assisted living, human-computer interaction, and surveillance. Most existing activity recognition approaches require users to provide a sufficient amount of annotations (labels) for each activity in order to achieve acceptable performance, and therefore often fail to scale to a large number of activities or to recognize new (emerging) activities. To overcome this limitation, systems that require only limited training data are highly desirable. However, existing activity recognition solutions for limited training data rely heavily on low-level activity or attribute extraction and therefore suffer from two major limitations: (1) they fail to work well when activities are highly similar to each other, such as jogging, running, and jumping front and back, and (2) they degrade overall system performance on recognizing existing activities that have sufficient training data. In this paper, we introduce SMART, a unified semi-supervised framework for recognizing highly similar emerging activities without sacrificing performance on recognizing existing activities. Extensive experiments on real-world data showed that, compared to the state of the art, SMART yielded superior performance on recognizing emerging activities, especially highly similar emerging activities, while providing comparable performance on recognizing existing activities.
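The abstract does not detail SMART's architecture. Purely as an illustrative sketch of the general technique the abstract names (semi-supervised learning from limited labels), the following shows a generic self-training loop over wearable-style feature vectors: confidently classified unlabeled samples receive pseudo-labels and are folded back into the labeled set. All names here (`self_train`, the nearest-centroid classifier, the `margin` heuristic) are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch only: generic self-training (pseudo-labeling) with a
# nearest-centroid base classifier. NOT the actual SMART framework.
import math


def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]


def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def self_train(labeled, unlabeled, rounds=3, margin=0.5):
    """labeled: list of (feature_vector, label); unlabeled: feature vectors.
    Each round, pseudo-label the unlabeled points whose nearest class
    centroid beats the second-nearest by `margin`, then refit centroids.
    Returns a classifier function mapping a feature vector to a label."""
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        classes = sorted({y for _, y in labeled})
        cents = {c: centroid([x for x, y in labeled if y == c]) for c in classes}
        confident, rest = [], []
        for x in pool:
            d = sorted((dist(x, cents[c]), c) for c in classes)
            if len(d) > 1 and d[1][0] - d[0][0] > margin:
                confident.append((x, d[0][1]))  # high-margin: pseudo-label it
            else:
                rest.append(x)  # ambiguous: keep for a later round
        labeled += confident
        pool = rest
        if not confident:  # nothing new was labeled; stop early
            break
    classes = sorted({y for _, y in labeled})
    cents = {c: centroid([x for x, y in labeled if y == c]) for c in classes}
    return lambda x: min(classes, key=lambda c: dist(x, cents[c]))
```

With only one labeled example per activity plus a handful of unlabeled samples, the refit centroids classify nearby points; real systems would of course use richer features and models, and SMART specifically targets the highly-similar-activity case where such naive margins break down.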
Pages: 1316 - 1321
Page count: 6
Related Papers
50 records total
  • [41] Multitask Collaborative Multi-modal Remote Sensing Target Segmentation Algorithm
    Mao, Xiuhua
    Zhang, Qiang
    Ruan, Hang
    Yang, Yuang
    Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2024, 46 (08): : 3363 - 3371
  • [42] Multi-Modal Fusion Transformer for Visual Question Answering in Remote Sensing
    Siebert, Tim
    Clasen, Kai Norman
    Ravanbakhsh, Mahdyar
    Demir, Begüm
    IMAGE AND SIGNAL PROCESSING FOR REMOTE SENSING XXVIII, 2022, 12267
  • [43] Multi-modal zero-shot dynamic hand gesture recognition
    Rastgoo, Razieh
    Kiani, Kourosh
    Escalera, Sergio
    Sabokrou, Mohammad
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 247
  • [44] Advancements in EEG Emotion Recognition: Leveraging Multi-Modal Database Integration
    Roshdy, Ahmed
    Karar, Abdullah
    Al Kork, Samer
    Beyrouthy, Taha
    Nait-ali, Amine
    APPLIED SCIENCES-BASEL, 2024, 14 (06):
  • [45] Semi-supervised Multi-modal Emotion Recognition with Cross-Modal Distribution Matching
    Liang, Jingjun
    Li, Ruichen
    Jin, Qin
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 2852 - 2861
  • [46] Multi-modal vertebrae recognition using Transformed Deep Convolution Network
    Cai, Yunliang
    Landis, Mark
    Laidley, David T.
    Kornecki, Anat
    Lum, Andrea
    Li, Shuo
    COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, 2016, 51 : 11 - 19
  • [47] DriveSense: A Multi-modal Emotion Recognition and Regulation System for a Car Driver
    Zhu, Lei
    Zhong, Zhinan
    Dai, Wan
    Chen, Yunfei
    Zhang, Yan
    Chen, Mo
    HCI IN MOBILITY, TRANSPORT, AND AUTOMOTIVE SYSTEMS, MOBITAS 2024, PT I, 2024, 14732 : 82 - 97
  • [48] RLITNN: A Multi-Channel Modulation Recognition Model Combining Multi-Modal Features
    Luo, Zhongqiang
    Xiao, Wenshi
    Zhang, Xueqin
    Zhu, Lidong
    Xiong, Xingzhong
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2024, 23 (12) : 19083 - 19097
  • [49] Research on Synthetic Data Generation and Multi-modal Target Detection
    Liu, Jiexi
    Wang, Chenyu
    Xu, Lei
    2022 34TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2022, : 5699 - 5703
  • [50] A Survey on Semantic Communications System Based on Multi-modal Data
    Win, Thwe Thwe
    Won, Dongwook
    Do, Quang Tuan
    Oh, Junsuk
    Hien, Pham Thi Thu
    Paek, Jeongyeup
    Cho, Sungrae
    38TH INTERNATIONAL CONFERENCE ON INFORMATION NETWORKING, ICOIN 2024, 2024, : 214 - 217