Human Activity Recognition with IMU and Vital Signs Feature Fusion

Cited by: 0
Authors
Xefteris, Vasileios-Rafail [1 ]
Tsanousa, Athina [1 ]
Mavropoulos, Thanassis [1 ]
Meditskos, Georgios [2 ]
Vrochidis, Stefanos [1 ]
Kompatsiaris, Ioannis [1 ]
Affiliations
[1] Informat Technol Inst, Ctr Res & Technol Hellas, 6th Km Charilaou Thermi, Thessaloniki 57001, Greece
[2] Aristotle Univ Thessaloniki, Sch Informat, Thessaloniki, Greece
Source
MULTIMEDIA MODELING (MMM 2022), PT I | 2022, Vol. 13141
Funding
European Union Horizon 2020;
Keywords
Human activity recognition; Wearable sensors; Vital signals; Sensor fusion; Feature selection;
DOI
10.1007/978-3-030-98358-1_23
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Combining data from different sources into an integrated view is a recent trend that takes advantage of the evolution of the Internet of Things (IoT) in recent years. The fusion of different modalities has applications in various fields, including healthcare and security systems. Human activity recognition (HAR) is among the most common applications of a healthcare or eldercare system. Inertial measurement unit (IMU) wearable sensors, such as accelerometers and gyroscopes, are often utilized for HAR applications. In this paper, we investigate the performance of wearable IMU sensors along with vital signs sensors for HAR. Massive feature extraction, including both time- and frequency-domain features as well as transitional features for the vital signs, was performed, followed by a feature selection method. Classification algorithms and different early and late fusion methods were applied to a public dataset. Experimental results revealed that both IMU and vital signs achieve reasonable HAR accuracy and F1-score across all classes. Feature selection significantly reduced the number of both IMU and vital signs features while also improving classification accuracy. The early- and late-level fusion methods also performed better than each modality alone, reaching an accuracy of up to 95.32%.
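For illustration, below is a minimal sketch of the two fusion strategies the abstract describes, written with scikit-learn against synthetic stand-in data: early (feature-level) fusion concatenates the IMU and vital-sign feature vectors before feature selection and classification, while late (decision-level) fusion averages the class probabilities of per-modality classifiers. The array shapes, the SelectKBest selector, and the RandomForest classifier are assumptions made for the sketch, not the authors' exact configuration.

```python
# Sketch of early (feature-level) and late (decision-level) fusion for HAR.
# Feature counts, selector, and classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 windows, 120 IMU features, 40 vital-sign features.
X_imu = rng.normal(size=(500, 120))
X_vital = rng.normal(size=(500, 40))
y = rng.integers(0, 5, size=500)  # 5 hypothetical activity classes

# ---- Early fusion: concatenate modalities, select features, classify ----
X_early = np.hstack([X_imu, X_vital])
Xe_tr, Xe_te, y_tr, y_te = train_test_split(X_early, y, random_state=0)

selector = SelectKBest(f_classif, k=50)  # keep the 50 most informative features
Xe_tr_sel = selector.fit_transform(Xe_tr, y_tr)
Xe_te_sel = selector.transform(Xe_te)

clf = RandomForestClassifier(random_state=0).fit(Xe_tr_sel, y_tr)
pred = clf.predict(Xe_te_sel)
print("early fusion:", accuracy_score(y_te, pred),
      f1_score(y_te, pred, average="macro"))

# ---- Late fusion: train one classifier per modality, average probabilities ----
# Same random_state and sample count reproduce the same train/test row split.
imu_tr, imu_te = train_test_split(X_imu, random_state=0)
vit_tr, vit_te = train_test_split(X_vital, random_state=0)
clf_imu = RandomForestClassifier(random_state=0).fit(imu_tr, y_tr)
clf_vit = RandomForestClassifier(random_state=0).fit(vit_tr, y_tr)
proba = (clf_imu.predict_proba(imu_te) + clf_vit.predict_proba(vit_te)) / 2
print("late fusion:", accuracy_score(y_te, proba.argmax(axis=1)))
```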
Pages: 287-298
Page count: 12