Sensor fusion based manipulative action recognition

Cited: 5
Authors
Gu, Ye [1 ]
Liu, Meiqin [2 ]
Sheng, Weihua [3 ]
Ou, Yongsheng [4 ]
Li, Yongqiang [5 ]
Affiliations
[1] Shenzhen Technology University, Shenzhen, Guangdong, People's Republic of China
[2] Zhejiang University, College of Electrical Engineering, Hangzhou 310027, People's Republic of China
[3] Shenzhen Academy of Robotics, Shenzhen, Guangdong, People's Republic of China
[4] Chinese Academy of Sciences, Shenzhen Institute of Advanced Technology, Shenzhen 518055, Guangdong, People's Republic of China
[5] Harbin Institute of Technology, 92 Xidazhi Street, Harbin 150001, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
Bayesian networks;
DOI
10.1007/s10514-020-09943-8
CLC number (Chinese Library Classification)
TP18 [Artificial intelligence theory];
Subject classification code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Manipulative action recognition is one of the most important and challenging topics in the field of image processing. In this paper, three kinds of sensor modules are used to capture motion, force and object information during manipulative actions, and two fusion methods are proposed; recognition accuracy is further improved by using the object as context. In the feature-level fusion method, significant features are selected first, and Hidden Markov Models (HMMs) are then built on these selected features to characterize the temporal sequences. In the decision-level fusion method, an HMM is built for each feature group and the individual decisions are then fused. On top of these two fusion methods, the object/action context is modeled with a Bayesian network. Assembly tasks are used for algorithm evaluation. The experimental results show that the proposed approach is effective for manipulative action recognition: the recognition accuracies of the decision-level fusion method, the feature-level fusion method and the Bayesian model are 72%, 80% and 90%, respectively.
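The abstract only outlines the fusion pipeline, so the sketch below illustrates in generic terms how decision-level fusion of per-modality HMM log-likelihoods can be combined with an object prior acting as context. It is a minimal illustration, not the authors' implementation: the hmmlearn library, the modality names (motion, force, object), the action labels, and the context table are all assumptions introduced here for clarity.

```python
# Minimal sketch of decision-level fusion with an object/action context prior.
# Assumptions (not from the paper): hmmlearn for the HMMs, three hypothetical
# modalities ("motion", "force", "object"), hypothetical action labels, and a
# hand-made table context_prior[object][action] approximating P(action | object).
import numpy as np
from hmmlearn.hmm import GaussianHMM

ACTIONS = ["pick", "place", "screw"]          # hypothetical action labels
MODALITIES = ["motion", "force", "object"]    # hypothetical sensor streams


def train_models(train_data, n_states=4):
    """train_data[(action, modality)] -> list of (T_i, D) feature sequences."""
    models = {}
    for key, seqs in train_data.items():
        X = np.vstack(seqs)                   # stack sequences for hmmlearn
        lengths = [len(s) for s in seqs]      # per-sequence lengths
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[key] = m
    return models


def classify(models, test_seqs, context_prior=None, detected_object=None):
    """test_seqs[modality] -> one (T, D) sequence; returns the best action.

    Decision-level fusion: per-modality HMM log-likelihoods are summed.
    Object/action context: an optional log-prior P(action | object) is added,
    standing in for the Bayesian-network context described in the abstract.
    """
    scores = {}
    for action in ACTIONS:
        ll = sum(models[(action, mod)].score(test_seqs[mod]) for mod in MODALITIES)
        if context_prior is not None and detected_object is not None:
            ll += np.log(context_prior[detected_object][action] + 1e-12)
        scores[action] = ll
    return max(scores, key=scores.get)
```

For the feature-level variant described above, one would instead concatenate the selected features from all modalities frame by frame and train a single HMM per action, scoring that one model at test time.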
Pages: 1-13
Number of pages: 13
Related papers
47 in total
[1] Ahmad M, 2006, INT C PATT RECOG, P263.
[2] Aldoma A, Marton Z-C, Tombari F, Wohlkinger W, Potthast C, Zeisl B, Rusu RB, Gedikli S, Vincze M. Tutorial: Point Cloud Library: Three-Dimensional Object Recognition and 6 DOF Pose Estimation. IEEE Robotics & Automation Magazine, 2012, 19(3): 80-91.
[3] Alhamzi K, 2015, International Journal of Advancements in Computing Technology, V7, P43.
[4] [Anonymous], 2016, arXiv:1612.04520.
[5] Bo R, 2012, TRANS DISTRIB CONF.
[6] Bux A, 2016, VISION BASED HUMAN A.
[7] Cai YP, 2016, IEEE GLOBE WORK.
[8] Chen C, Jafari R, Kehtarnavaz N. A survey of depth and inertial sensor fusion for human action recognition. Multimedia Tools and Applications, 2017, 76(3): 4405-4425.
[9] Chernbumroong S, Cang S, Yu H. A practical multi-sensor activity recognition system for home-based care. Decision Support Systems, 2014, 66: 61-70.
[10] Chu V, 2016, ACM/IEEE International Conference on Human-Robot Interaction, P221, DOI 10.1109/HRI.2016.7451755.