DL-Net: Sparsity Prior Learning for Grasp Pattern Recognition

Cited by: 2
Authors
Huang, Ziwei [1 ]
Zheng, Jingjing [2 ]
Zhao, Li [1 ]
Chen, Huiling [1 ]
Jiang, Xianta [2 ]
Zhang, Xiaoqin [1 ]
Affiliations
[1] Wenzhou Univ, Coll Comp Sci & Artificial Intelligence, Wenzhou 325035, Zhejiang, Peoples R China
[2] Mem Univ Newfoundland, Dept Comp Sci, St John's, NL A1B 3X7, Canada
Source
IEEE ACCESS | 2023, Volume 11
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Pattern recognition; Dictionaries; Sparse matrices; Convolutional neural networks; Image reconstruction; Deep learning; Computer vision; Grasp pattern recognition; computer vision; dictionary learning; deep learning; CONVOLUTIONAL NEURAL-NETWORKS;
DOI
10.1109/ACCESS.2023.3236402
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
The purpose of grasp pattern recognition is to determine the grasp type for an object to be grasped, which can be applied to prosthetic hand control and ease the burden on amputees. To improve grasp pattern recognition performance, we propose DL-Net, a network inspired by dictionary learning. Our method consists of two parts: 1) forward propagation for sparse representation learning and 2) backward propagation for dictionary learning, which exploits the sparsity prior effectively and learns a discriminative dictionary with stronger expressive ability from a large amount of training data. Experiments were performed on two household-object datasets, the RGB-D Object dataset and the Hit-GPRec dataset. The experimental results show that DL-Net outperforms traditional deep learning methods in grasp pattern recognition.
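The abstract describes a layer whose forward pass infers a sparse code for an input feature and whose backward pass updates a learned dictionary. Below is a minimal, hypothetical PyTorch sketch of that idea using an unrolled ISTA-style sparse coding layer; the class names, the number of atoms and iterations, the soft-threshold weight, and the classifier head are illustrative assumptions, not the authors' exact DL-Net design.

# Hypothetical sketch of a dictionary-learning-inspired layer (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseCodingLayer(nn.Module):
    """Forward pass: infer a sparse code z for input x via unrolled ISTA.
    Backward pass: gradients of the task loss update the dictionary D."""

    def __init__(self, feat_dim: int, num_atoms: int, n_iters: int = 5, lam: float = 0.1):
        super().__init__()
        self.D = nn.Parameter(torch.randn(feat_dim, num_atoms) * 0.01)  # learned dictionary
        self.n_iters = n_iters
        self.lam = lam  # sparsity weight used by the soft-threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feat_dim) -> z: (batch, num_atoms), a sparse code
        D = self.D
        step = 1.0 / (torch.linalg.norm(D, ord=2) ** 2 + 1e-8)  # ISTA step size from spectral norm
        z = torch.zeros(x.size(0), D.size(1), device=x.device, dtype=x.dtype)
        for _ in range(self.n_iters):
            residual = x - z @ D.t()                                # x - D z
            z = z + step * (residual @ D)                           # gradient step on ||x - Dz||^2
            z = torch.sign(z) * F.relu(z.abs() - step * self.lam)   # soft-threshold enforces sparsity
        return z

class DLNetSketch(nn.Module):
    """Hypothetical pipeline: image features -> sparse code -> grasp-type classifier."""

    def __init__(self, feat_dim: int = 512, num_atoms: int = 256, num_grasps: int = 5):
        super().__init__()
        self.sparse = SparseCodingLayer(feat_dim, num_atoms)
        self.classifier = nn.Linear(num_atoms, num_grasps)  # num_grasps is illustrative

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.sparse(feats))

# Usage example with random features standing in for CNN outputs:
# feats = torch.randn(4, 512)
# logits = DLNetSketch()(feats)   # (4, num_grasps) grasp-type scores

Because the dictionary D is an nn.Parameter, the ordinary backward pass of the classification loss updates it, mirroring the "backward propagation for dictionary learning" described in the abstract.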
Pages: 6444-6451
Number of pages: 8
Related papers
50 records in total
  • [31] A Deep Learning-Based Ultrasonic Pattern Recognition Method for Inspecting Girth Weld Cracking of Gas Pipeline
    Yan, Y.
    Liu, D.
    Gao, B.
    Tian, G. Y.
    Cai, Z. C.
    IEEE SENSORS JOURNAL, 2020, 20 (14) : 7997 - 8006
  • [32] Optically interconnected Kohonen net for pattern recognition
    Doi, T
    Namba, T
    Uehara, A
    Nagata, M
    Miyazaki, S
    Shibahara, K
    Yokoyama, S
    Iwata, A
    Ae, T
    Hirose, M
    JAPANESE JOURNAL OF APPLIED PHYSICS PART 1-REGULAR PAPERS SHORT NOTES & REVIEW PAPERS, 1996, 35 (2B) : 1405 - 1409
  • [33] Robot grasp pattern recognition based on wavelet and BP neural network
    Chen, Jinjun
    Xiang, Ting
    2013 INTERNATIONAL CONFERENCE ON PROCESS EQUIPMENT, MECHATRONICS ENGINEERING AND MATERIAL SCIENCE, 2013, 331 : 290 - 293
  • [34] ISL recognition system using integrated mobile-net and transfer learning method
    Sharma, Sakshi
    Singh, Sukhwinder
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 221
  • [35] A Comparative Study to Deep Learning for Pattern Recognition, By using Online and Batch Learning; Taking Cybersecurity as a case
    Djellali, Choukri
    Adda, Mehdi
    Moutacalli, Mohamed Tarik
    PROCEEDINGS OF THE 2019 IEEE/ACM INTERNATIONAL CONFERENCE ON ADVANCES IN SOCIAL NETWORKS ANALYSIS AND MINING (ASONAM 2019), 2019, : 907 - 912
  • [36] Pattern Recognition of Acute Lymphoblastic Leukemia (ALL) Using Computational Deep Learning
    Jiwani, Nasmin
    Gupta, Ketan
    Pau, Giovanni
    Alibakhshikenari, Mohammad
    IEEE ACCESS, 2023, 11 : 29541 - 29553
  • [37] Generator Stator Partial Discharge Pattern Recognition Based on PRPD-Grabcut and DSC-GoogLeNet Deep Learning
    Chen, Yuzhu
    Peng, Xiaosheng
    Wang, Hongyu
    Zhou, Jin
    Zhang, Yue
    Liang, Zhiming
    IEEE TRANSACTIONS ON DIELECTRICS AND ELECTRICAL INSULATION, 2023, 30 (05) : 2267 - 2276
  • [38] Process planning for die and mold machining based on pattern recognition and deep learning
    Hashimoto, Mayu
    Nakamoto, Keiichi
    JOURNAL OF ADVANCED MECHANICAL DESIGN SYSTEMS AND MANUFACTURING, 2021, 15 (02)
  • [39] Correlation Net: Spatiotemporal multimodal deep learning for action recognition
    Yudistira, Novanto
    Kurita, Takio
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2020, 82
  • [40] DL-DARE: Deep learning-based different activity recognition for the human–robot interaction environment
    Kansal, Sachin
    Jha, Sagar
    Samal, Prathamesh
    NEURAL COMPUTING AND APPLICATIONS, 2023, 35 : 12029 - 12037