Age and Gender Recognition Using a Convolutional Neural Network with a Specially Designed Multi-Attention Module through Speech Spectrograms

Cited: 39
Authors
Tursunov, Anvarjon [1 ]
Mustaqeem [1]
Choeh, Joon Yeon [2 ]
Kwon, Soonil [1 ]
Affiliations
[1] Sejong Univ, Dept Software, Interact Technol Lab, Seoul 05006, South Korea
[2] Sejong Univ, Dept Software, Intelligent Contents Lab, Seoul 05006, South Korea
Keywords
human-computer interaction; convolutional neural network; multi-attention module; age and gender recognition; speech signals; SPEAKER AGE; DEEP; CLASSIFICATION; LSTM; CNN
DOI
10.3390/s21175892
CLC Number
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Speech signals are a primary input source in human-computer interaction (HCI) and are used to develop applications such as automatic speech recognition (ASR), speech emotion recognition (SER), and gender and age recognition. Classifying speakers by age and gender is a challenging task in speech processing owing to the inability of current methods to extract salient high-level speech features and of current classification models to exploit them. To address these problems, we introduce a novel end-to-end convolutional neural network (CNN) for age and gender recognition from speech signals, built around a specially designed multi-attention module (MAM). Our proposed model uses the MAM to effectively extract spatial and temporal salient features from the input data. The MAM uses rectangular-shaped filters as convolution kernels and comprises two separate attention mechanisms: a time attention branch that learns to detect temporal cues, and a frequency attention branch that focuses on spatial frequency features to extract those most relevant to the target. The combined spatial and temporal features complement one another and yield high age and gender classification performance. The proposed age and gender classification system was tested on the Common Voice dataset and a locally developed Korean speech recognition dataset. Our model achieved accuracy scores of 96%, 73%, and 76% for gender, age, and age-gender classification, respectively, on the Common Voice dataset, and 97%, 97%, and 90%, respectively, on the Korean speech recognition dataset. The prediction performance obtained in these experiments demonstrates the superiority and robustness of our model on age, gender, and age-gender recognition from speech signals.
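
To make the described MAM design concrete, below is a minimal PyTorch sketch of a two-branch time/frequency attention module over spectrogram features. The exact rectangular kernel shapes (here assumed 1x7 along time and 7x1 along frequency), the channel count, the sigmoid gating, and the additive fusion of the two attended feature maps are all illustrative assumptions; the abstract does not specify these details, so this is a sketch of the idea rather than the authors' implementation.

import torch
import torch.nn as nn

class MultiAttentionModule(nn.Module):
    """Minimal sketch of a multi-attention module (MAM) for speech
    spectrograms with separate time- and frequency-attention branches.
    Kernel shapes, channel count, and fusion rule are assumptions."""

    def __init__(self, channels: int):
        super().__init__()
        # Time-attention branch: a rectangular kernel wide along the
        # time axis (assumed 1x7) to capture temporal cues.
        self.time_att = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=(1, 7), padding=(0, 3)),
            nn.Sigmoid(),
        )
        # Frequency-attention branch: a rectangular kernel tall along the
        # frequency axis (assumed 7x1) to focus on spectral features.
        self.freq_att = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=(7, 1), padding=(3, 0)),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, freq_bins, time_frames) spectrogram features.
        # Each branch produces an attention map that reweights the input;
        # the two attended feature maps are combined by summation
        # (the fusion rule here is an assumption).
        return x * self.time_att(x) + x * self.freq_att(x)

if __name__ == "__main__":
    mam = MultiAttentionModule(channels=16)
    feats = torch.randn(2, 16, 64, 100)  # batch of log-mel spectrogram features
    print(mam(feats).shape)  # torch.Size([2, 16, 64, 100])

In this sketch each branch preserves the input shape, so the module can be dropped between convolution stages of a CNN; a classifier head for gender, age, or joint age-gender labels would follow the attended features.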
Pages: 19