Perception of Tactile Directionality via Artificial Fingerpad Deformation and Convolutional Neural Networks

Cited by: 6
Authors
Gutierrez, Kenneth [1 ]
Santos, Veronica J. [2 ]
Affiliations
[1] Univ Calif Los Angeles, Los Angeles, CA 90095 USA
[2] Univ Calif Los Angeles, Mech & Aerosp Engn Dept, Los Angeles, CA 90095 USA
Funding
U.S. National Science Foundation;
Keywords
Perturbation methods; Tactile sensors; Force; Strain; Electrodes; Convolutional neural networks; manipulation; robot; skin displacement; tactile directionality; tactile images; tactile perception; tactile sensors; THUMB RESPONSES; OBJECT HELD; GRIP FORCE; REPRESENTATION; AFFERENTS; RESTRAINT; TEXTURES; SIGNALS; SLIP; SKIN;
DOI
10.1109/TOH.2020.2975555
CLC Number
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Humans can perceive tactile directionality with angular perception thresholds of 14-40 degrees via fingerpad skin displacement. Using deformable, artificial tactile sensors, the ability to perceive tactile directionality was developed for a robotic system to aid in object manipulation tasks. Two convolutional neural networks (CNNs) were trained on tactile images created from fingerpad deformation measurements during perturbations to a handheld object. A primary CNN regression model provided a point estimate of tactile directionality over a range of grip forces, perturbation angles, and perturbation speeds. A secondary CNN model provided a variance estimate that was used to determine uncertainty about the point estimate. A 5-fold cross-validation was performed to evaluate model performance. The primary CNN produced tactile directionality point estimates with an error rate of 4.3% at a 20 degree angular resolution and was benchmarked against an open-source force estimation network. The model was implemented in real time for interactions with an external agent and the environment using different object shapes and widths. The perception of tactile directionality could be used to enhance the situational awareness of human operators of telerobotic systems and to develop decision-making algorithms for context-appropriate responses by semi-autonomous robots.
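As a rough illustration of the two-network architecture described in the abstract, the sketch below pairs a primary CNN regressor (point estimate of tactile directionality) with a secondary CNN that predicts a variance-style uncertainty about that estimate. The layer sizes, the 1x32x32 tactile-image shape, the network names (direction_net, log_var_net), and the Gaussian negative log-likelihood loss are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): a primary CNN regresses a tactile
# directionality point estimate from a tactile image; a secondary CNN predicts
# a log-variance used to gauge uncertainty about that point estimate.
import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    """Small CNN backbone mapping a tactile image to a single scalar output."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.head(self.features(x))

# Primary network: point estimate of the perturbation direction (degrees).
direction_net = TactileCNN()
# Secondary network: log-variance of the point estimate (log keeps variance positive).
log_var_net = TactileCNN()

def gaussian_nll(y_pred, log_var, y_true):
    """Heteroscedastic Gaussian negative log-likelihood (illustrative loss choice)."""
    return torch.mean(0.5 * torch.exp(-log_var) * (y_pred - y_true) ** 2 + 0.5 * log_var)

# Example forward/backward pass on a dummy batch of 1-channel 32x32 "tactile images".
tactile_images = torch.randn(4, 1, 32, 32)
true_angles = torch.rand(4, 1) * 360.0

angle_hat = direction_net(tactile_images)   # point estimate of directionality
log_var = log_var_net(tactile_images)       # uncertainty about the point estimate
loss = gaussian_nll(angle_hat, log_var, true_angles)
loss.backward()
```

In this kind of setup, a large predicted variance flags tactile images for which the direction estimate should be treated cautiously, which is consistent with the abstract's use of a variance estimate to determine uncertainty about the point estimate.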
Pages: 831-839
Page count: 9