Enhanced-Deep-Residual-Shrinkage-Network-Based Voiceprint Recognition in the Electric Industry

Cited by: 2
Authors
Zhang, Qingrui [1 ]
Zhai, Hongting [1 ]
Ma, Yuanyuan [2 ]
Sun, Lili [1 ]
Zhang, Yantong [1 ]
Quan, Weihong [1 ]
Zhai, Qi [1 ]
He, Bangwei [2 ]
Bai, Zhiquan [2 ]
Affiliations
[1] State Grid Shandong Electric Power Co., Information & Telecommunication Branch, Jinan 250001, People's Republic of China
[2] Shandong University, School of Information Science & Engineering, Qingdao 266237, People's Republic of China
Keywords
voiceprint recognition; deep learning; deep residual shrinkage network; convolutional block attention mechanism; hybrid dilated convolution;
DOI
10.3390/electronics12143017
CLC number
TP [Automation Technology, Computer Technology]
Discipline classification code
0812
Abstract
Voiceprint recognition extracts voice features and identifies the speaker from the voice signal, which has promising applications in personnel identity verification and voice dispatching in the electric industry. Traditional voiceprint recognition algorithms work well in a quiet environment. However, noise interference is unavoidable in the electric industry and degrades their accuracy. In this paper, we propose an enhanced deep residual shrinkage network (EDRSN)-based voiceprint recognition method that combines traditional voiceprint recognition algorithms with deep learning (DL) for the noisy electric industry environment, where a dual-path convolution recurrent network (DPCRN) is employed to reduce the noise and its structure is further improved based on the deep residual shrinkage network (DRSN). Moreover, we use a convolutional block attention mechanism (CBAM) module and a hybrid dilated convolution (HDC) module in the proposed EDRSN. Simulation results show that the proposed network can enhance the speaker's vocal features and further distinguish and suppress the noise features, thereby reducing the influence of noise and achieving better recognition performance in a noisy electric industry environment.
Pages: 15
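The abstract names three building blocks on top of the DRSN backbone: learned channel-wise soft thresholding, CBAM attention, and hybrid dilated convolutions. The sketch below illustrates, under stated assumptions, how such a block could be assembled for 1-D (frame-level) voiceprint features in PyTorch. Class names, layer sizes, kernel sizes, and dilation rates are illustrative assumptions and not the authors' configuration; the DPCRN denoising front end and the speaker-classification head are omitted.

```python
# Hypothetical EDRSN-style residual block: hybrid dilated convolutions,
# CBAM attention, and DRSN-style soft thresholding. A sketch only; the
# paper's abstract names the components but not their exact arrangement.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelThreshold(nn.Module):
    """DRSN-style learned channel-wise soft thresholding."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, T)
        abs_mean = x.abs().mean(dim=-1)         # per-channel magnitude, (B, C)
        tau = (abs_mean * self.fc(abs_mean)).unsqueeze(-1)   # learned thresholds
        return torch.sign(x) * F.relu(x.abs() - tau)         # soft thresholding


class CBAM1d(nn.Module):
    """Convolutional block attention: channel attention, then temporal attention."""
    def __init__(self, channels: int, reduction: int = 4, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.temporal = nn.Conv1d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                       # x: (B, C, T)
        avg = self.mlp(x.mean(dim=-1))
        mx = self.mlp(x.amax(dim=-1))
        x = x * torch.sigmoid(avg + mx).unsqueeze(-1)        # channel attention
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)  # (B, 2, T)
        return x * torch.sigmoid(self.temporal(s))           # temporal attention


class EDRSNBlock(nn.Module):
    """Residual block combining HDC, CBAM, and soft thresholding (illustrative)."""
    def __init__(self, channels: int, dilations=(1, 2, 5)):
        super().__init__()
        layers = []
        for d in dilations:                     # co-prime dilations avoid gridding
            layers += [nn.Conv1d(channels, channels, 3, padding=d, dilation=d),
                       nn.BatchNorm1d(channels), nn.ReLU(inplace=True)]
        self.hdc = nn.Sequential(*layers)
        self.cbam = CBAM1d(channels)
        self.shrink = ChannelThreshold(channels)

    def forward(self, x):
        y = self.shrink(self.cbam(self.hdc(x)))
        return x + y                            # residual connection


if __name__ == "__main__":
    feats = torch.randn(2, 64, 200)             # e.g. 64-dim features, 200 frames
    print(EDRSNBlock(64)(feats).shape)          # torch.Size([2, 64, 200])
```

The design intent, as described in the abstract, is that the attention and thresholding stages emphasize speaker-related features while attenuating noise-dominated channels; the dilated convolutions enlarge the temporal receptive field without extra parameters.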