Age and Gender Recognition Using a Convolutional Neural Network with a Specially Designed Multi-Attention Module through Speech Spectrograms

Cited by: 39
Authors
Tursunov, Anvarjon [1 ]
Mustageem [1 ]
Choeh, Joon Yeon [2 ]
Kwon, Soonil [1 ]
Affiliations
[1] Sejong Univ, Dept Software, Interact Technol Lab, Seoul 05006, South Korea
[2] Sejong Univ, Dept Software, Intelligent Contents Lab, Seoul 05006, South Korea
Keywords
human-computer interaction; convolutional neural network; multi-attention module; age and gender recognition; speech signals; SPEAKER AGE; DEEP; CLASSIFICATION; LSTM; CNN
DOI
10.3390/s21175892
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Speech signals serve as a primary input source in human-computer interaction (HCI) for applications such as automatic speech recognition (ASR), speech emotion recognition (SER), and gender and age recognition. Classifying speakers by age and gender is a challenging task in speech processing owing to the inability of current methods to extract salient high-level speech features and the limitations of existing classification models. To address these problems, we introduce a novel end-to-end convolutional neural network (CNN) with a specially designed multi-attention module (MAM) for age and gender recognition from speech signals. The MAM effectively extracts spatial and temporal salient features from the input data. It uses rectangular filters as kernels in its convolution layers and comprises two separate attention mechanisms, one for time and one for frequency. The time-attention branch learns to detect temporal cues, whereas the frequency-attention branch extracts the features most relevant to the target by focusing on spatial frequency features. The two extracted feature sets complement one another and together yield high age and gender classification performance. The proposed system was tested on the Common Voice dataset and a locally developed Korean speech recognition dataset. On Common Voice, our model achieved accuracy scores of 96%, 73%, and 76% for gender, age, and age-gender classification, respectively; on the Korean dataset, it achieved 97%, 97%, and 90%. The prediction performance obtained in these experiments demonstrates the superiority and robustness of our model on age, gender, and age-gender recognition from speech signals.
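To make the described architecture concrete, below is a minimal sketch of a multi-attention module built from two parallel rectangular-kernel attention branches, one oriented along time and one along frequency, as the abstract describes. This is an illustrative reconstruction under stated assumptions, not the authors' published implementation: the kernel shapes (1x7 and 7x1), channel counts, sigmoid gating, and additive fusion of the two branches are all assumptions.

```python
# Hypothetical sketch of a multi-attention module (MAM) with time- and
# frequency-attention branches over a spectrogram feature map. Kernel
# shapes, gating, and fusion are assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class MultiAttentionModule(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Time-attention branch: a kernel that is wide along the time axis
        # (1 x 7), so it responds to temporal cues in the spectrogram.
        self.time_att = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=(1, 7), padding=(0, 3)),
            nn.Sigmoid(),
        )
        # Frequency-attention branch: a kernel that is tall along the
        # frequency axis (7 x 1), so it focuses on spectral structure.
        self.freq_att = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=(7, 1), padding=(3, 0)),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, freq_bins, time_frames)
        t = x * self.time_att(x)   # temporally re-weighted features
        f = x * self.freq_att(x)   # spectrally re-weighted features
        return t + f               # combine the two complementary views


if __name__ == "__main__":
    # A 64-channel feature map from a 128-bin, 200-frame spectrogram.
    feats = torch.randn(2, 64, 128, 200)
    mam = MultiAttentionModule(channels=64)
    print(mam(feats).shape)  # torch.Size([2, 64, 128, 200])
```

The rectangular kernels encode the intuition stated in the abstract: a filter elongated along one axis aggregates context along that axis only, so each branch produces an attention map specialized for either temporal or spectral cues before the two are fused.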
Pages: 19