A Deep Learning Method Based on Triplet Network Using Self-Attention for Tactile Grasp Outcomes Prediction

Cited by: 6
Authors
Liu, Chengliang [1 ,2 ]
Yi, Zhengkun [1 ,2 ]
Huang, Binhua [1 ]
Zhou, Zhenning [1 ,2 ]
Fang, Senlin [1 ,3 ]
Li, Xiaoyu [1 ]
Zhang, Yupo [1 ]
Wu, Xinyu [1 ,2 ,4 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Guangdong Prov Key Lab Robot & Intelligent Syst, Shenzhen 518055, Peoples R China
[2] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[3] City Univ Macau, Fac Data Sci, Macau 999078, Peoples R China
[4] Shenzhen Inst Artificial Intelligence & Robot Soc, SIAT Branch, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Contrastive learning; deep learning; grasping; self-attention; triplet network; slip
DOI
10.1109/TIM.2023.3285986
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Recent research has demonstrated that pregrasp tactile information can effectively predict whether a grasp will succeed. However, most existing grasp prediction models perform unsatisfactorily when only a small dataset is available. In this article, we propose a deep network framework based on a triplet network with self-attention mechanisms for grasp outcome prediction. By grouping samples into contrastive triplets, our method generates more training units and uncovers latent relationships between samples through the triplet loss. In addition, the self-attention mechanism helps capture the internal correlations of features, further improving network performance. We also show that the self-attention module serves as a more effective nonlinear projection head for contrastive learning than a multilayer perceptron. Experimental results on a publicly available dataset demonstrate the effectiveness of the proposed framework.
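To make the contrastive setup in the abstract concrete, here is a minimal sketch, not the authors' implementation: a weight-shared triplet network over tactile features whose nonlinear projection head is a self-attention block rather than an MLP, trained with the triplet margin loss. PyTorch, all dimensions, and the names `AttentionHead` and `TripletTactileNet` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionHead(nn.Module):
    """Self-attention projection head (hypothetical stand-in for the MLP head)."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) -- a sequence of tactile feature tokens
        out, _ = self.attn(x, x, x)   # capture internal correlations of features
        out = self.norm(out + x)      # residual connection + layer norm
        return out.mean(dim=1)        # pool tokens into a single embedding

class TripletTactileNet(nn.Module):
    def __init__(self, in_dim: int = 64, dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, dim), nn.ReLU())
        self.head = AttentionHead(dim)

    def embed(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x))

    def forward(self, anchor, positive, negative):
        # one encoder, shared across all three branches of the triplet network
        return self.embed(anchor), self.embed(positive), self.embed(negative)

# Triplets are formed from grasp samples: anchor and positive share the same
# grasp outcome (success/failure); the negative has the opposite outcome.
model = TripletTactileNet()
loss_fn = nn.TripletMarginLoss(margin=1.0)
a, p, n = (torch.randn(8, 10, 64) for _ in range(3))  # 8 triplets, 10 tokens each
loss = loss_fn(*model(a, p, n))
loss.backward()
```

The triplet loss pulls the anchor toward the positive and pushes it away from the negative in embedding space, which is one plausible reading of how contrasting triplets expose relationships between samples when training data are scarce.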
Pages: 14