Approaching the Real-World: Supporting Activity Recognition Training with Virtual IMU Data

Cited by: 22
Authors
Kwon, Hyeokhyen [1 ]
Wang, Bingyao [2 ]
Abowd, Gregory D. [3 ]
Ploetz, Thomas [1 ]
Affiliations
[1] Georgia Inst Technol, Sch Interact Comp, Atlanta, GA 30332 USA
[2] Georgia Inst Technol, Coll Comp, Atlanta, GA 30332 USA
[3] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
Source
PROCEEDINGS OF THE ACM ON INTERACTIVE MOBILE WEARABLE AND UBIQUITOUS TECHNOLOGIES (IMWUT) | 2021, Vol. 5, No. 3
Keywords
Activity Recognition; Data Collection; Machine Learning;
DOI
10.1145/3478096
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
Recently, IMUTube introduced a paradigm change for bootstrapping human activity recognition (HAR) systems for wearables. The key idea is to utilize videos of activities to support training activity recognizers based on inertial measurement units (IMUs). The system retrieves videos from public repositories and subsequently generates virtual IMU data from them. The ultimate vision for such a system is to make large amounts of weakly labeled videos accessible for model training in HAR and, as such, to overcome one of the most pressing issues in the field: the lack of significant amounts of labeled sample data. In this paper, we present the first in-depth exploration of IMUTube in a realistic assessment scenario: the analysis of free-weight gym exercises. We make significant progress towards a flexible, fully functional IMUTube system by extending it to handle a range of artifacts that are common in unrestricted online videos, including various forms of video noise, non-human poses, body-part occlusions, and extreme camera and human motion. By overcoming these real-world challenges, we are able to generate high-quality virtual IMU data, which allows us to employ IMUTube for practical analysis tasks. We show that HAR systems trained by incorporating virtual sensor data generated by IMUTube significantly outperform baseline models trained only with real IMU data. In doing so, we demonstrate the practical utility of IMUTube and the progress made towards the final vision of the new bootstrapping paradigm.
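The core signal transformation the abstract describes, deriving virtual inertial data from video, can be sketched in miniature. Assuming a 3D joint trajectory has already been extracted from video by a pose estimator, a minimal approximation of virtual accelerometer data is the second time derivative of the joint position. The function name `virtual_accel` and its parameters are illustrative, not part of the published IMUTube implementation, and effects the full system must handle (gravity, sensor orientation, camera ego-motion, noise calibration) are deliberately omitted.

```python
import numpy as np

def virtual_accel(joint_xyz: np.ndarray, fs: float) -> np.ndarray:
    """Approximate virtual accelerometer readings from a 3D joint trajectory.

    joint_xyz: array of shape (T, 3), joint position over T video frames.
    fs: sampling rate in Hz (frames per second of the trajectory).

    Differentiating position twice with central differences yields
    linear acceleration per axis; this is a simplified stand-in for
    the virtual IMU extraction a full pipeline would perform.
    """
    dt = 1.0 / fs
    vel = np.gradient(joint_xyz, dt, axis=0)   # first derivative: velocity
    acc = np.gradient(vel, dt, axis=0)         # second derivative: acceleration
    return acc

# Sanity check: a joint moving at constant velocity has zero acceleration.
t = np.arange(0, 1, 0.01)[:, None]            # 100 samples at 100 Hz
traj = np.hstack([t, 2.0 * t, np.zeros_like(t)])
acc = virtual_accel(traj, fs=100.0)
```

In practice, such a trajectory would come from a 2D-to-3D pose-lifting stage, and the resulting virtual signals would be mixed with real IMU recordings when training the HAR model, as the abstract reports.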
Pages: 32