Fusing Object Information and Inertial Data for Activity Recognition

Citations: 6
Authors
Diete, Alexander [1 ]
Stuckenschmidt, Heiner [1 ]
Affiliations
[1] Univ Mannheim, Data & Web Sci Grp, D-68159 Mannheim, Germany
Keywords
activity recognition; machine learning; multi-modality; vision; prevention
DOI
10.3390/s19194119
Abstract
In the field of pervasive computing, wearable devices have been widely used for recognizing human activities. One important area in this research is the recognition of activities of daily living, where inertial sensors and interaction sensors (such as RFID tags with scanners) are especially popular data sources. Using interaction sensors, however, has one drawback: they may not differentiate between a proper interaction and the simple touching of an object. A positive signal from an interaction sensor is not necessarily caused by a performed activity, e.g., when an object is only touched but no interaction occurs afterwards. There are, however, many scenarios, such as medicine intake, that rely heavily on correctly recognized activities. In our work, we aim to address this limitation and present a multimodal, egocentric activity recognition approach. Our solution relies on object detection to recognize activity-critical objects in a frame. As it is infeasible to always expect a high-quality camera view, we enrich the vision features with inertial sensor data that monitors the user's arm movement. In this way, we try to overcome the drawbacks of each respective sensor. We present our results of combining inertial and video features to recognize human activities on different types of scenarios, achieving an F1-measure of up to 79.6%.
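The fusion described in the abstract (enriching vision-based object features with inertial features) can be illustrated with a minimal feature-level sketch. This is not the authors' implementation; the function names, the per-axis mean/standard-deviation features, and the binary object-presence vocabulary are illustrative assumptions about one common way to combine the two modalities before classification.

```python
# Hedged sketch: feature-level fusion of inertial and object-detection data.
# All names (inertial_features, object_features, fused_features) are
# hypothetical and only illustrate the general multimodal-fusion idea.
from statistics import mean, stdev


def inertial_features(window):
    """Per-axis mean and standard deviation over a window of (x, y, z) samples."""
    axes = list(zip(*window))  # regroup samples into x-, y-, z-axis series
    feats = []
    for axis in axes:
        feats.append(mean(axis))
        feats.append(stdev(axis))
    return feats


def object_features(detected, vocabulary):
    """Binary presence vector over a vocabulary of activity-critical objects."""
    return [1.0 if obj in detected else 0.0 for obj in vocabulary]


def fused_features(window, detected, vocabulary):
    """Concatenate both modalities into one vector for a downstream classifier."""
    return inertial_features(window) + object_features(detected, vocabulary)


# Toy example: three accelerometer samples and one detected object ("cup").
window = [(0.1, 0.0, 9.8), (0.2, -0.1, 9.7), (0.0, 0.1, 9.9)]
vocab = ["cup", "pill_box", "phone"]
vec = fused_features(window, {"cup"}, vocab)
# 6 inertial features (mean/std per axis) + 3 object-presence features
```

A classifier trained on such fused vectors can fall back on the arm-movement features when the camera view is poor, and on the object features when the movement signal is ambiguous, which is the complementarity the abstract aims for.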
Pages: 22