Transfer of Learning from Vision to Touch: A Hybrid Deep Convolutional Neural Network for Visuo-Tactile 3D Object Recognition

Cited by: 8
Authors
Rouhafzay, Ghazal [1 ]
Cretu, Ana-Maria [2 ]
Payeur, Pierre [1 ]
Affiliations
[1] Univ Ottawa, Sch Elect Engn & Comp Sci, Ottawa, ON K1N 6N5, Canada
[2] Univ Quebec Outaouais, Dept Comp Sci & Engn, Gatineau, PQ J8X 3X7, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
3D object recognition; transfer learning; machine intelligence; convolutional neural networks; tactile sensors; force-sensing resistor; Barrett Hand;
DOI
10.3390/s21010113
Chinese Library Classification
O65 [Analytical Chemistry];
Discipline codes
070302; 081704;
Abstract
Transfer of learning, i.e., leveraging a pre-trained network and fine-tuning it to perform new tasks, has been successfully applied in a variety of machine intelligence fields, including computer vision, natural language processing, and audio/speech recognition. Drawing inspiration from neuroscience research suggesting that visual and tactile stimuli activate similar neural networks in the human brain, in this work we explore the idea of transferring learning from vision to touch in the context of 3D object recognition. In particular, deep convolutional neural networks (CNNs) pre-trained on visual images are adapted and evaluated for the classification of tactile datasets. To do so, we ran experiments with five different pre-trained CNN architectures on five different datasets acquired with different tactile sensing technologies, including BathTip, GelSight, a force-sensing resistor (FSR) array, a high-resolution virtual FSR sensor, and the tactile sensors on the Barrett robotic hand. The results confirm the transferability of learning from vision to touch for interpreting 3D models. Owing to its higher resolution, tactile data from optical tactile sensors achieved higher classification rates based on visual features than data from technologies relying on pressure measurements. A further analysis of the weight updates in the convolutional layers measures the similarity between visual and tactile features for each tactile sensing technology. Comparing the weight updates across convolutional layers suggests that updating only a few convolutional layers of a CNN pre-trained on visual data is sufficient for it to classify tactile data efficiently. Accordingly, we propose a hybrid architecture performing both visual and tactile 3D object recognition with a MobileNetV2 backbone.
MobileNetV2 is chosen for its small size and hence its suitability for mobile devices, so that a single network can classify both visual and tactile data. The proposed architecture achieves an accuracy of 100% on visual data and 77.63% on tactile data.
Pages: 1-15 (15 pages)
Related papers
50 in total
  • [1] Deep Active Cross-Modal Visuo-Tactile Transfer Learning for Robotic Object Recognition
    Murali, Prajval Kumar
    Wang, Cong
    Lee, Dongheui
    Dahiya, Ravinder
    Kaboli, Mohsen
    IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (04): : 9557 - 9564
  • [2] VoxRec: Hybrid Convolutional Neural Network for Active 3D Object Recognition
    Karambakhsh, Ahmad
    Sheng, Bin
    Li, Ping
    Yang, Po
    Jung, Younhyun
    Feng, David Dagan
    IEEE ACCESS, 2020, 8 (08): : 70969 - 70980
  • [3] A Novel Method for Training Mice in Visuo-Tactile 3-D Object Discrimination and Recognition
    Hu, Xian
    Urhie, Ogaga
    Chang, Kevin
    Hostetler, Rachel
    Agmon, Ariel
    FRONTIERS IN BEHAVIORAL NEUROSCIENCE, 2018, 12
  • [4] 3D convolutional neural network for object recognition: a review
    Singh, Rahul Dev
    Mittal, Ajay
    Bhatia, Rajesh K.
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (12) : 15951 - 15995
  • [6] Deep learning hybrid 3D/2D convolutional neural network for prostate MRI recognition.
    Van, Jasper
    Yoon, Choongheon
    Glavis-Bloom, Justin
    Bardis, Michelle
    Ushinsky, Alexander
    Chow, Daniel S.
    Chang, Peter
    Houshyar, Roozbeh
    Chantaduly, Chanon
    Grant, William A.
    Fujimoto, Dylann
    JOURNAL OF CLINICAL ONCOLOGY, 2019, 37 (15)
  • [7] Using 3D Convolutional Neural Networks for Tactile Object Recognition with Robotic Palpation
    Pastor, Francisco
    Gandarias, Juan M.
    Garcia-Cerezo, Alfonso J.
    Gomez-de-Gabriel, Jesus M.
    SENSORS, 2019, 19 (24)
  • [8] 3D Object Recognition Using Convolutional Neural Networks with Transfer Learning Between Input Channels
    Alexandre, Luis A.
    INTELLIGENT AUTONOMOUS SYSTEMS 13, 2016, 302 : 888 - 897
  • [9] Convolutional Neural Network for 3D Object Recognition using Volumetric Representation
    Xu, Xiaofan
    Dehghani, Alireza
    Corrigan, David
    Caulfield, Sam
    Moloney, David
    2016 FIRST INTERNATIONAL WORKSHOP ON SENSING, PROCESSING AND LEARNING FOR INTELLIGENT MACHINES (SPLINE), 2016,
  • [10] 3D Face Recognition Method Based on Deep Convolutional Neural Network
    Feng, Jianying
    Guo, Qian
    Guan, Yudong
    Wu, Mengdie
    Zhang, Xingrui
    Ti, Chunli
    SMART INNOVATIONS IN COMMUNICATION AND COMPUTATIONAL SCIENCES, VOL 2, 2019, 670 : 123 - 130