Human Grasp Feature Learning and Object Recognition Based on Multi-sensor Information Fusion

Cited: 0
Authors
Zhang Y. [1]
Huang Y. [1,2]
Liu Y. [1]
Liu C. [1]
Liu P. [1]
Zhang Y. [1]
Affiliations
[1] School of Electronic Science & Applied Physics, Hefei University of Technology, Hefei
[2] The State Key Laboratory of Bioelectronics, Southeast University, Nanjing
Source
Jiqiren/Robot | 2020, Vol. 42, No. 3
Keywords
Convolutional neural network; Flexible wearable sensor; Grasp feature; Multi-modal information fusion; Object recognition
DOI
10.13973/j.cnki.robot.190353
Abstract
Human grasp feature learning and object recognition are studied based on flexible wearable sensors and multi-modal information fusion, and the application of perceptual information to the human grasping process is explored. A data glove is built from 10 strain sensors, 14 temperature sensors and 78 pressure sensors; worn on the human hand, it measures the bending angles of the finger joints as well as the temperature and pressure distribution of the grasped object during grasping. A cross-modal information representation is established over temporal and spatial sequences, and the multi-modal information is fused by a deep convolutional neural network to construct a learning model of human grasp features and to recognize the grasped object accurately. Experiments and validity analyses are carried out for the joint-angle feature, the temperature feature, and the pressure feature respectively. The results show that 18 kinds of objects can be recognized accurately through multi-modal fusion of the information from the multiple sensors. © 2020, Science Press. All rights reserved.
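To make the fusion scheme concrete, below is a minimal PyTorch sketch of the kind of multi-branch convolutional network the abstract describes. Only the sensor channel counts (10 strain, 14 temperature, 78 pressure) and the 18 object classes come from the record; the layer sizes, time-window length, and concatenation-based fusion are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical multi-modal fusion CNN for data-glove grasp recognition.
# Channel counts (10/14/78) and 18 classes follow the abstract; everything
# else (kernel sizes, feature widths, fusion by concatenation) is assumed.
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    """1-D CNN that encodes one sensor modality over a time window."""
    def __init__(self, in_channels: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> (batch, feat_dim)
        return self.net(x).squeeze(-1)

class GraspFusionNet(nn.Module):
    """Fuses strain, temperature, and pressure features for 18-way recognition."""
    def __init__(self, n_classes: int = 18, feat_dim: int = 64):
        super().__init__()
        self.strain = ModalityBranch(10, feat_dim)       # finger-joint bending
        self.temperature = ModalityBranch(14, feat_dim)  # object temperature
        self.pressure = ModalityBranch(78, feat_dim)     # contact pressure map
        self.classifier = nn.Sequential(
            nn.Linear(3 * feat_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, strain, temp, pressure):
        # Feature-level fusion: concatenate per-modality embeddings.
        fused = torch.cat(
            [self.strain(strain), self.temperature(temp), self.pressure(pressure)],
            dim=1,
        )
        return self.classifier(fused)

if __name__ == "__main__":
    model = GraspFusionNet()
    t = 100  # assumed number of samples per grasp window
    logits = model(torch.randn(4, 10, t), torch.randn(4, 14, t), torch.randn(4, 78, t))
    print(logits.shape)  # torch.Size([4, 18])
```

Feature-level concatenation is only one plausible reading of "multi-modal information fusion"; the paper's cross-modal spatio-temporal representation may instead stack modalities into a joint input tensor before convolution.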
Pages: 267-277
Page count: 10