MOSRA: Joint Mean Opinion Score and Room Acoustics Speech Quality Assessment

Cited by: 3
Authors
El Hajal, Karl [1 ,2 ]
Cernak, Milos [1 ]
Mainar, Pablo [1 ]
Affiliations
[1] Logitech Europe SA, Lausanne, Switzerland
[2] Ecole Polytech Fed Lausanne, Lausanne, Switzerland
Source
INTERSPEECH 2022 | 2022
Keywords
Speech quality assessment; joint learning; room acoustics; BAND
DOI
10.21437/Interspeech.2022-10698
Chinese Library Classification
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
The acoustic environment can degrade speech quality during communication (e.g., video call, remote presentation, outside voice recording), and its impact is often unknown. Objective metrics for speech quality have proven challenging to develop given the multi-dimensionality of factors that affect speech quality and the difficulty of collecting labeled data. Hypothesizing the impact of acoustics on speech quality, this paper presents MOSRA: a non-intrusive multi-dimensional speech quality metric that can predict room acoustics parameters (SNR, STI, T60, DRR, and C50) alongside the overall mean opinion score (MOS) for speech quality. By explicitly optimizing the model to learn these room acoustics parameters, we can extract more informative features and improve the generalization for the MOS task when the training data is limited. Furthermore, we also show that this joint training method enhances the blind estimation of room acoustics, improving the performance of current state-of-the-art models. An additional side-effect of this joint prediction is the improvement in the explainability of the predictions, which is a valuable feature for many applications.
Pages: 3313-3317
Page count: 5