Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition

Cited: 8
Authors
Rouhafzay, Ghazal [1 ]
Cretu, Ana-Maria [2 ]
Payeur, Pierre [1 ]
Affiliations
[1] Univ Ottawa, Sch Elect Engn & Comp Sci, Ottawa, ON K1N 6N5, Canada
[2] Univ Quebec Outaouais, Dept Comp Sci & Engn, Gatineau, PQ J8X 3X7, Canada
Funding
Natural Sciences and Engineering Research Council of Canada
Keywords
3D object recognition; transfer learning; machine intelligence; convolutional neural networks; tactile sensors; force-sensing resistor; Barrett Hand;
DOI
10.3390/s21010113
Chinese Library Classification
O65 [Analytical Chemistry]
Discipline Codes
070302; 081704
Abstract
Transfer of learning, or leveraging a pre-trained network and fine-tuning it to perform new tasks, has been successfully applied in a variety of machine intelligence fields, including computer vision, natural language processing, and audio/speech recognition. Drawing inspiration from neuroscience research suggesting that visual and tactile stimuli activate similar neural networks in the human brain, in this work we explore the idea of transferring learning from vision to touch in the context of 3D object recognition. In particular, deep convolutional neural networks (CNNs) pre-trained on visual images are adapted and evaluated for the classification of tactile datasets. To do so, we ran experiments with five different pre-trained CNN architectures on five different datasets acquired with different tactile sensing technologies, including BathTip, GelSight, a force-sensing resistor (FSR) array, a high-resolution virtual FSR sensor, and the tactile sensors on the Barrett robotic hand. The results confirm the transferability of learning from vision to touch for interpreting 3D models. Due to its higher resolution, tactile data from optical tactile sensors achieved higher classification rates based on visual features than data from technologies relying on pressure measurements. A further analysis of the weight updates in the convolutional layers is performed to measure the similarity between visual and tactile features for each tactile sensing technology. Comparing the weight updates across convolutional layers suggests that updating only a few convolutional layers of a CNN pre-trained on visual data is sufficient to classify tactile data efficiently. Accordingly, we propose a hybrid architecture performing both visual and tactile 3D object recognition with a MobileNetV2 backbone. MobileNetV2 is chosen for its smaller size and thus its suitability for deployment on mobile devices, such that the network can classify both visual and tactile data. The proposed architecture achieves an accuracy of 100% on visual data and 77.63% on tactile data.
Pages: 1-15 (15 pages)
Related Papers
50 records in total
  • [31] Wang J., Fan Y., Li Z. Texture Image Recognition Based on Deep Convolutional Neural Network and Transfer Learning. Journal of Computer-Aided Design and Computer Graphics, 2022, 34(5): 701-710
  • [32] Zhou N., Du J. Recognition of Social Touch Gestures Using 3D Convolutional Neural Networks. Pattern Recognition (CCPR 2016), Pt I, 2016, 662: 164-173
  • [33] Funabashi S., Yan G., Fei H., Schmitz A., Jamone L., Ogata T., Sugano S. Tactile Transfer Learning and Object Recognition With a Multifingered Hand Using Morphology Specific Convolutional Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 2024, 35(6): 7587-7601
  • [34] Ben Tamou A., Benzinou A., Nasreddine K., Ballihi L. Transfer Learning with Deep Convolutional Neural Network for Underwater Live Fish Recognition. 2018 IEEE Third International Conference on Image Processing, Applications and Systems (IPAS), 2018: 204-209
  • [35] Lee J., Kim Y., Jeong M., Kim C., Nam D.-W., Lee J., Moon S., Yoo W. 3D Convolutional Neural Networks for Soccer Object Motion Recognition. 2018 20th International Conference on Advanced Communication Technology (ICACT), 2018: 354-358
  • [36] Lima T., Fernandes B., Barros P. Human Action Recognition with 3D Convolutional Neural Network. 2017 IEEE Latin American Conference on Computational Intelligence (LA-CCI), 2017
  • [37] Li J., Liu Z., Li L., Lin J., Yao J., Tu J. Multi-view Convolutional Vision Transformer for 3D Object Recognition. Journal of Visual Communication and Image Representation, 2023, 95
  • [38] Pan X., Zhang Z., Zhang Y. 3D Anisotropic Convolutional Neural Network with Step Transfer Learning for Liver Segmentation. Proceedings of the 4th International Conference on Communication and Information Processing (ICCIP 2018), 2018: 86-90
  • [39] Agrawal V., Jagtap J., Patil S., Kotecha K. Performance Analysis of Hybrid Deep Learning Framework Using a Vision Transformer and Convolutional Neural Network for Handwritten Digit Recognition. MethodsX, 2024, 12
  • [40] Rouhafzay G., Cretu A.-M., Payeur P. Biologically Inspired Vision and Touch Sensing to Optimize 3D Object Representation and Recognition. IEEE Instrumentation and Measurement Magazine, 2021, 24(3): 85-90