Human activity recognition based on fusing inertial sensors with an optical receiver

Cited by: 0
Authors
Salem, Ziad [1 ]
Lichtenegger, Felix [2 ]
Weiss, Andreas P. [1 ]
Leiner, Claude [2 ]
Sommer, Christian [2 ]
Wenzl, Franz P. [1 ]
Affiliations
[1] Joanneum Res Forsch mbH, Inst Surface Technol & Photon, Smart Connected Lighting, Ind Str 6, A-7423 Pinkafeld, Austria
[2] Joanneum Res Forschungsges mbH, Inst Surface Technol & Photon, Light & Opt Technol, Franz Pichler Str 30, A-8160 Weiz, Austria
Source
OPTICAL SENSING AND DETECTION VII | 2022, Vol. 12139
Keywords
Human activity recognition; inertial measurement unit sensors; optical receiver; optical simulation; sensors data fusion; machine learning
DOI
10.1117/12.2621187
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Code
0808; 0809;
Abstract
Research on Human Activity Recognition (HAR) systems has received considerable attention due to its importance in demanding and challenging fields such as health care, social science, robotics and artificial intelligence. One of the most prominent approaches is to use Inertial Measurement Unit (IMU) sensors to determine what activity a human is performing. If complex activities such as sit-down, stand-up, walk-up and walk-down need to be recognized, the user has to wear multiple sensors on his/her body to achieve correct recognition. Such activity recognition is of even greater interest if the user's position is also recognized. To recognize both activity and location properly, a suitable technique for fusing the multiple sources of information is required. In this study, we propose a novel positioning and HAR system based on fusing data from a single IMU device with data from a simulated segmented optical receiver performing visible light positioning (VLP). We combine real-world data collected from the IMU device with optical simulation data generated from a simulated segmented optical receiver in order to distinguish between various complex activities, particularly walk, walk-up and walk-down, in addition to determining the position where the activity is performed. The fusion mechanism not only improves the accuracy of the activity recognition in comparison to utilizing either IMU or optical data alone, but also enables the system to retrieve the user's position in the room. By applying different machine learning (ML) algorithms for the assessment of the achievable results, we conduct a comprehensive analysis of which ML method is suitable for our envisioned low-complexity HAR and positioning system, which avoids the placement of multiple sensors on the user's body.
Our results show the influence of different segmentation strategies for the novel concept of a segmented optical receiver, in combination with an IMU sensor, on the accuracy of the activity and position recognition.
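The fusion described in the abstract can be illustrated with a minimal sketch: statistical features from an IMU window are concatenated with the per-segment intensities of a segmented optical receiver, and the fused vector is fed to a classifier. All names, shapes, and the toy data below are illustrative assumptions, not the authors' actual pipeline; a simple nearest-centroid classifier stands in for the ML methods compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_imu_features(accel_window):
    """Simple statistical features (mean, std per axis) from a window
    of 3-axis accelerometer samples."""
    return np.concatenate([accel_window.mean(axis=0),
                           accel_window.std(axis=0)])

def fuse(imu_feat, optical_feat):
    """Feature-level fusion: concatenate IMU features with the
    per-segment intensities of the (simulated) optical receiver."""
    return np.concatenate([imu_feat, optical_feat])

def make_sample(label):
    """Toy data generator: two activities, a 50-sample accelerometer
    window and a 4-segment optical receiver reading."""
    accel = rng.normal(loc=(0.0 if label == "walk" else 1.0), size=(50, 3))
    optical = rng.normal(loc=(1.0 if label == "walk" else 2.0), size=4)
    return fuse(extract_imu_features(accel), optical), label

# Build a small training set and compute one centroid per activity.
train = [make_sample(lbl) for lbl in ["walk", "walk-up"] * 20]
X = np.stack([feat for feat, _ in train])
y = np.array([lbl for _, lbl in train])
centroids = {lbl: X[y == lbl].mean(axis=0) for lbl in np.unique(y)}

def predict(feat):
    """Assign the activity whose centroid is nearest in feature space."""
    return min(centroids, key=lambda lbl: np.linalg.norm(feat - centroids[lbl]))

sample, true_label = make_sample("walk-up")
print(predict(sample), "| true:", true_label)
```

Because the fused vector carries both motion statistics and position-dependent optical intensities, the same construction lets a classifier separate activities that look alike inertially but occur at different locations, which is the core idea of the paper's IMU-plus-VLP fusion.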
Pages: 19