SMART: Emerging Activity Recognition with Limited Data for Multi-modal Wearable Sensing

Cited by: 1
Authors
Ghorpadel, Madhuri [1 ]
Chen, Haiquan [1 ]
Liu, Yuhong [2 ]
Jiang, Zhe [3 ]
Institutions
[1] Calif State Univ Sacramento, Sacramento, CA 95819 USA
[2] Santa Clara Univ, Santa Clara, CA 95053 USA
[3] Univ Alabama, Tuscaloosa, AL USA
Source
2020 IEEE International Conference on Big Data (Big Data) | 2020
Keywords
Activity Recognition; Semi-supervised Learning; Deep Learning;
DOI
10.1109/BigData50022.2020.9377746
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Activity recognition using ubiquitous wearable devices (e.g., smartphones, smartwatches, and sport bracelets) can be applied to many application domains, such as healthcare, smart environments, assisted living, human-computer interaction, and surveillance. Most existing activity recognition approaches require users to provide a sufficient amount of annotations (labels) for each activity in order to achieve acceptable performance, and therefore often fail to scale to a large number of activities or to recognize new (emerging) activities. To overcome this limitation, systems that require only limited training data are highly desirable. However, existing activity recognition solutions for limited training data rely heavily on low-level activity or attribute extraction and therefore suffer from two major limitations: (1) they fail to work well when activities are highly similar to each other, such as jogging, running, and jumping front and back, and (2) they degrade overall system performance on recognizing existing activities that have sufficient training data. In this paper, we introduce SMART, a unified semi-supervised framework for recognizing highly similar emerging activities without sacrificing performance on recognizing existing activities. Extensive experiments on real-world data showed that, compared to the state of the art, SMART yielded superior performance on recognizing emerging activities, especially highly similar emerging activities, while providing comparable performance on recognizing existing activities.
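The record does not include the paper's code. As an illustration only, the semi-supervised setting the abstract describes (few labels per activity, many unlabeled sensor windows) can be sketched with a generic self-training (pseudo-labeling) loop over a nearest-centroid classifier. All names below (`fit_centroids`, `pseudo_label_round`, the margin threshold) are hypothetical and are not SMART's actual algorithm:

```python
import numpy as np

def fit_centroids(X, y):
    """Compute one centroid per activity class from labeled feature windows."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(X, classes, centroids):
    """Nearest-centroid labels plus a margin-based confidence score
    (gap between the distances to the two closest class centroids)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    ordered = np.sort(d, axis=1)
    margin = ordered[:, 1] - ordered[:, 0]
    return classes[d.argmin(axis=1)], margin

def pseudo_label_round(X_lab, y_lab, X_unlab, margin_thresh=1.0):
    """One self-training round: fit on labeled data, pseudo-label the
    unlabeled windows, and keep only confident (high-margin) predictions."""
    classes, cents = fit_centroids(X_lab, y_lab)
    y_hat, margin = predict(X_unlab, classes, cents)
    keep = margin > margin_thresh
    X_new = np.vstack([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, y_hat[keep]])
    return X_new, y_new
```

The margin-based confidence filter matters precisely in the situation the abstract highlights: when an emerging activity is highly similar to an existing one, low-margin windows are ambiguous, and admitting them as pseudo-labels would degrade performance on the existing activities.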
Pages: 1316 - 1321
Page count: 6
Related Papers (50 in total)
  • [21] Attention Driven Fusion for Multi-modal Emotion Recognition. Priyasad, Darshana; Fernando, Tharindu; Denman, Simon; Sridharan, Sridha; Fookes, Clinton. 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2020: 3227 - 3231
  • [22] Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review. Zhang, Jianhua; Yin, Zhong; Chen, Peng; Nichele, Stefano. Information Fusion, 2020, 59: 103 - 126
  • [23] A comprehensive video dataset for multi-modal recognition systems. Handa, A.; Agarwal, R.; Kohli, N. Data Science Journal, 2019, 18 (01)
  • [24] Multi-modal Emotion Recognition for Determining Employee Satisfaction. Zaman, Farhan Uz; Zaman, Maisha Tasnia; Alam, Md Ashraful; Alam, Md Golam Rabiul. 2021 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE), 2021
  • [25] Activity Recognition System for Dementia in Smart Homes based on Wearable Sensor Data. Su, Chun-Fang; Fu, Li-Chen; Chien, Yi-Wei; Li, Ting-Ying. 2018 IEEE Symposium Series on Computational Intelligence (IEEE SSCI), 2018: 463 - 469
  • [26] Multi-modal generative adversarial networks for traffic event detection in smart cities. Chen, Qi; Wang, Wei; Huang, Kaizhu; De, Suparna; Coenen, Frans. Expert Systems with Applications, 2021, 177
  • [27] Multi-View and Multi-Modal Action Recognition with Learned Fusion. Ardianto, Sandy; Hang, Hsueh-Ming. 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2018: 1601 - 1604
  • [28] Enhanced AIoT Multi-Modal Fusion for Human Activity Recognition in Ambient Assisted Living Environment. Patel, Ankit D.; Jhaveri, Rutvij H.; Patel, Ashish D.; Shah, Kaushal A.; Shah, Jigarkumar. Software-Practice & Experience, 2025, 55 (04): 731 - 747
  • [29] Ticino: A multi-modal remote sensing dataset for semantic segmentation. Barbato, Mirko Paolo; Piccoli, Flavio; Napoletano, Paolo. Expert Systems with Applications, 2024, 249
  • [30] InstaIndoor and multi-modal deep learning for indoor scene recognition. Glavan, Andreea; Talavera, Estefanía. Neural Computing and Applications, 2022, 34: 6861 - 6877