A Study of Gender Impact in Self-supervised Models for Speech-to-Text Systems

Cited by: 9
Authors
Boito, Marcely Zanon [1 ]
Besacier, Laurent [2 ]
Tomashenko, Natalia [1 ]
Esteve, Yannick [1 ]
Affiliations
[1] Avignon Univ, LIA, Avignon, France
[2] NAVER LABS Europe, Meylan, France
Source
INTERSPEECH 2022 | 2022
Funding
EU Horizon 2020
Keywords
self-supervised models; gender bias; speech-to-text; automatic speech recognition; speech translation;
DOI
10.21437/Interspeech.2022-353
Chinese Library Classification
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
Self-supervised models for speech processing have recently emerged as popular foundation blocks in speech processing pipelines. These models are pre-trained on unlabeled audio data and then used in downstream speech processing tasks such as automatic speech recognition (ASR) or speech translation (ST). Since these models are now used in research and industrial systems alike, it becomes necessary to understand the impact of features such as the gender distribution within the pre-training data. Using French as our investigation language, we train and compare gender-specific wav2vec 2.0 models against models with different degrees of gender balance in their pre-training data. The comparison is performed by applying these models to two speech-to-text downstream tasks: ASR and ST. The results show that the type of downstream integration matters. We observe lower overall performance when gender-specific pre-training is used before fine-tuning an end-to-end ASR system. However, when the self-supervised models are used as feature extractors, the overall ASR and ST results follow more complex patterns in which the balanced pre-trained model does not necessarily lead to the best results. Lastly, our crude 'fairness' metric, the relative performance difference measured between the female and male test sets, does not vary strongly from balanced to gender-specific pre-trained wav2vec 2.0 models.
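The 'fairness' metric mentioned in the abstract is a relative performance difference between the female and male test sets. The paper does not define the exact formula in this record, so the following is a minimal sketch of one plausible formulation (gap normalized by the mean of the two scores); the function name and the normalization choice are assumptions, not the authors' confirmed definition.

```python
def relative_gender_gap(score_female: float, score_male: float) -> float:
    """Relative difference between female and male test-set scores.

    Hypothetical formulation: (female - male) / mean(female, male).
    For an error metric such as WER, a positive value means worse
    performance on the female test set; zero means parity.
    """
    mean = (score_female + score_male) / 2.0
    return (score_female - score_male) / mean


# Usage with hypothetical WER values (lower WER is better):
gap = relative_gender_gap(14.2, 15.0)  # negative: better on the female set
```

Normalizing by the mean (rather than by one group's score) keeps the metric symmetric in the two groups, which is convenient when neither set is a natural reference.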
Pages: 1278-1282 (5 pages)