Multi-modal lifelog data fusion for improved human activity recognition: A hybrid approach

Cited by: 0
|
Authors
Oh, Yongkyung [1]
Kim, Sungil [1,2]
Affiliations
[1] Ulsan Natl Inst Sci & Technol UNIST, Dept Ind Engn, Ulsan, South Korea
[2] Ulsan Natl Inst Sci & Technol UNIST, Artificial Intelligence Grad Sch, Ulsan, South Korea
Keywords
Multi-modal data; Data fusion strategy; Hybrid approach; Human activity recognition
DOI
10.1016/j.inffus.2024.102464
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The rapid growth of lifelog data, collected through smartphones and wearable devices, has driven the need for better Human Activity Recognition (HAR) solutions. However, lifelog data is complex and challenging to analyze due to its diverse sources of information. In response, we introduce an innovative hybrid data fusion framework for HAR. This framework comprises three key elements: a hybrid fusion mechanism, an attention-based classifier, and an ensemble-based recognition approach. Our hybrid fusion mechanism expertly combines the advantages of late and intermediate fusion, enhancing classification performance and improving the network's ability to learn connections between different data modalities. Additionally, our solution incorporates an attention-based classifier and an ensemble approach, ensuring robust and consistent performance in real-world scenarios. We evaluated our method across multiple public lifelog datasets, demonstrating that our hybrid fusion approach consistently surpasses existing fusion strategies in HAR, promising significant advancements in activity recognition.
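The abstract's core idea of blending late and intermediate fusion can be illustrated with a minimal sketch. This is not the authors' actual architecture (the paper uses deep encoders, an attention-based classifier, and an ensemble); it is a toy NumPy example, with invented linear "encoders" and a hypothetical mixing weight `alpha`, showing how an intermediate-fusion score (one classifier over concatenated modality features) can be combined with a late-fusion score (averaged per-modality classifier outputs):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Per-modality encoder: a linear map + ReLU (stand-in for a deep network)."""
    return np.maximum(x @ W, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def hybrid_fusion_predict(modalities, encoders, heads, fusion_head, alpha=0.5):
    """Blend intermediate fusion (shared classifier on concatenated features)
    with late fusion (average of per-modality class scores)."""
    feats = [encode(x, W) for x, W in zip(modalities, encoders)]
    # Intermediate fusion: one classifier over the concatenated feature vector.
    inter_logits = np.concatenate(feats, axis=-1) @ fusion_head
    # Late fusion: average the per-modality classifier outputs.
    late_logits = np.mean([f @ h for f, h in zip(feats, heads)], axis=0)
    # Hybrid: convex combination of the two fusion strategies.
    return softmax(alpha * inter_logits + (1 - alpha) * late_logits)

# Toy lifelog example: accelerometer (6-dim) and heart-rate (2-dim)
# windows for a batch of 8 samples, classified into 4 activities.
acc, hr = rng.normal(size=(8, 6)), rng.normal(size=(8, 2))
encoders = [rng.normal(size=(6, 16)), rng.normal(size=(2, 16))]
heads = [rng.normal(size=(16, 4)), rng.normal(size=(16, 4))]
fusion_head = rng.normal(size=(32, 4))
probs = hybrid_fusion_predict([acc, hr], encoders, heads, fusion_head)
print(probs.shape)  # (8, 4); each row is a probability distribution
```

In the paper's framework the fixed `alpha` would be replaced by learned components, but the sketch captures why hybrid fusion can help: the intermediate path learns cross-modal interactions while the late path keeps each modality's independent decision.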
Pages: 15
Related papers
50 records
  • [1] Hybrid Multi-modal Fusion for Human Action Recognition
    Seddik, Bassem
    Gazzah, Sami
    Ben Amara, Najoua Essoukri
    IMAGE ANALYSIS AND RECOGNITION, ICIAR 2017, 2017, 10317 : 201 - 209
  • [2] Human activity recognition based on multi-modal fusion
    Zhang, Cheng
    Zu, Tianqi
    Hou, Yibin
    He, Jian
    Yang, Shengqi
    Dong, Ruihai
    CCF TRANSACTIONS ON PERVASIVE COMPUTING AND INTERACTION, 2023, 5 (03) : 321 - 332
  • [4] A Human Activity Recognition-Aware Framework Using Multi-modal Sensor Data Fusion
    Kwon, Eunjung
    Park, Hyunho
    Byon, Sungwon
    Jung, Eui-Suk
    Lee, Yong-Tae
    2018 IEEE INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS (ICCE), 2018,
  • [5] Multi-modal hybrid hierarchical classification approach with transformers to enhance complex human activity recognition
    Ezzeldin, Mustafa
    Ghoneim, Amr S.
    Abdelhamid, Laila
    Atia, Ayman
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (12) : 9375 - 9385
  • [6] Multi-modal Sensing for Human Activity Recognition
    Bruno, Barbara
    Grosinger, Jasmin
    Mastrogiovanni, Fulvio
    Pecora, Federico
    Saffiotti, Alessandro
    Sathyakeerthy, Subhash
    Sgorbissa, Antonio
    2015 24TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2015, : 594 - 600
  • [7] Interpretable Passive Multi-Modal Sensor Fusion for Human Identification and Activity Recognition
    Yuan, Liangqi
    Andrews, Jack
    Mu, Huaizheng
    Vakil, Asad
    Ewing, Robert
    Blasch, Erik
    Li, Jia
    SENSORS, 2022, 22 (15)
  • [8] Human Behavior Recognition Algorithm Based on Multi-Modal Sensor Data Fusion
    Zheng, Dingchao
    Chen, Caiwei
    Yu, Jianzhe
    JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2025, 29 (02) : 287 - 305
  • [9] Rethinking Fusion Baselines for Multi-modal Human Action Recognition
    Jiang, Hongda
    Li, Yanghao
    Song, Sijie
    Liu, Jiaying
    ADVANCES IN MULTIMEDIA INFORMATION PROCESSING, PT III, 2018, 11166 : 178 - 187
  • [10] Heterogeneous Multi-Modal Sensor Fusion with Hybrid Attention for Exercise Recognition
    Wijekoon, Anjana
    Wiratunga, Nirmalie
    Cooper, Kay
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,