Human activity recognition based on fusing inertial sensors with an optical receiver

Cited by: 0
Authors
Salem, Ziad [1 ]
Lichtenegger, Felix [2 ]
Weiss, Andreas P. [1 ]
Leiner, Claude [2 ]
Sommer, Christian [2 ]
Wenzl, Franz P. [1 ]
Affiliations
[1] Joanneum Res Forsch mbH, Inst Surface Technol & Photon, Smart Connected Lighting, Ind Str 6, A-7423 Pinkafeld, Austria
[2] Joanneum Res Forschungsges mbH, Inst Surface Technol & Photon, Light & Opt Technol, Franz Pichler Str 30, A-8160 Weiz, Austria
Source
OPTICAL SENSING AND DETECTION VII | 2022 / Vol. 12139
Keywords
Human activity recognition; inertial measurement unit sensors; optical receiver; optical simulation; sensors data fusion; machine learning;
DOI
10.1117/12.2621187
Chinese Library Classification
TM (electrical engineering); TN (electronic and communication technology);
Discipline codes
0808; 0809;
Abstract
Research on Human Activity Recognition (HAR) systems has received considerable attention due to its importance in demanding and challenging fields such as health care, social science, robotics and artificial intelligence. One of the most prominent approaches is to use Inertial Measurement Unit (IMU) sensors to determine which activity a human is performing. To recognize complex activities such as sit-down, stand-up, walk-up and walk-down, the user typically needs to wear multiple sensors on the body. Such activity recognition becomes even more valuable if the subject's position is recognized as well. To recognize both activity and location properly, a suitable technique for fusing the multiple sources of information is required. In this study, we propose a novel positioning and HAR system that fuses data from a single IMU device with data from a simulated segmented optical receiver performing visible light positioning (VLP). We combine real-world data collected from the IMU device with optical simulation data generated from a simulated segmented optical receiver to distinguish between various complex activities, particularly walk, walk-up and walk-down, and to determine the position where the activity is performed. The fusion mechanism not only improves the accuracy of the activity recognition compared to using either IMU or optical data alone, but also enables the system to retrieve the user's position in the room. By applying different machine-learning (ML) algorithms to assess the achievable results, we conduct a comprehensive analysis of which ML method is suitable for our envisioned low-complexity HAR and positioning system, which avoids the placement of multiple sensors on the user's body.
Our results show the influence of different segmentation strategies for the novel concept of a segmented optical receiver, in combination with an IMU sensor, on the accuracy of the activity and position recognition.
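The feature-level fusion described in the abstract (concatenating per-window IMU features with per-segment optical intensities before classification) can be sketched as follows. This is a minimal illustrative sketch with synthetic data, not the paper's code; all dimensions, feature choices and the RandomForest classifier are assumptions for demonstration.

```python
# Hypothetical sketch: fusing IMU features with segmented-optical-receiver
# intensities for activity classification. Synthetic data stands in for the
# real IMU recordings and optical simulations used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_windows = 600        # sliding windows over the recording (assumed)
n_imu_features = 12    # e.g. mean/std per accelerometer/gyroscope axis
n_segments = 4         # photodiode segments of the optical receiver (assumed)

# Three activity classes: 0 = walk, 1 = walk-up, 2 = walk-down
y = rng.integers(0, 3, size=n_windows)

# Synthetic IMU features: class-dependent offset plus noise
imu = rng.normal(0.0, 1.0, size=(n_windows, n_imu_features)) + y[:, None] * 0.8

# Synthetic optical data: per-segment received intensity varies with the
# user's position, which in turn correlates with the performed activity
optical = rng.normal(0.0, 1.0, size=(n_windows, n_segments)) + y[:, None] * 0.5

# Feature-level fusion: concatenate both modalities per window
fused = np.hstack([imu, optical])

X_tr, X_te, y_tr, y_te = train_test_split(fused, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"fused accuracy: {acc:.2f}")
```

Dropping the `optical` columns from `fused` gives the IMU-only baseline, so the same script can reproduce the kind of fused-versus-single-modality comparison the abstract reports.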
Pages: 19