Dr.VOT: Measuring Positive and Negative Voice Onset Time in the Wild

Cited by: 5
Authors
Shrem, Yosi [1 ]
Goldrick, Matthew [2 ]
Keshet, Joseph [1 ]
Affiliations
[1] Bar Ilan Univ, Dept Comp Sci, Ramat Gan, Israel
[2] Northwestern Univ, Dept Linguist, Evanston, IL USA
Source
INTERSPEECH 2019 | 2019
Keywords
structured prediction; multi-task learning; adversarial training; recurrent neural networks; sequence segmentation; LANGUAGE; SPEECH; VOT;
DOI
10.21437/Interspeech.2019-1735
Chinese Library Classification (CLC)
R36 [Pathology]; R76 [Otorhinolaryngology];
Subject Classification Codes
100104; 100213;
Abstract
Voice Onset Time (VOT), a key measurement of speech for basic research and applied medical studies, is the time between the onset of a stop burst and the onset of voicing. When the voicing onset precedes the burst onset, the VOT is negative; if the voicing onset follows the burst, it is positive. In this work, we present a deep-learning model for accurate and reliable measurement of VOT in naturalistic speech. The proposed system addresses two critical issues: it can measure positive and negative VOT equally well, and it is trained to be robust to variation across annotations. Our approach is based on the structured prediction framework, where the feature functions are defined to be RNNs. These learn to capture segmental variation in the signal. Results suggest that our method substantially improves over the current state-of-the-art. In contrast to previous work, our Deep and Robust VOT annotator, Dr.VOT, can successfully estimate negative VOTs while maintaining state-of-the-art performance on positive VOTs. This high level of performance generalizes to new corpora without further retraining.
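The abstract's core idea, structured prediction with RNN feature functions that score candidate burst and voicing onsets and return a signed VOT, can be illustrated with a minimal sketch. This is not the authors' released code; the class name VOTSketch, the feature dimension n_feats, and the frame_ms conversion are illustrative assumptions.

```python
# Minimal sketch: a bidirectional RNN scores every frame as a possible burst
# onset or voicing onset; decoding searches all (burst, voicing) frame pairs
# for the highest-scoring pair and returns the signed VOT (negative when
# voicing precedes the burst, i.e., prevoicing).
import torch
import torch.nn as nn

class VOTSketch(nn.Module):
    def __init__(self, n_feats=40, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_feats, hidden, batch_first=True, bidirectional=True)
        self.burst_score = nn.Linear(2 * hidden, 1)  # score of frame t being the burst onset
        self.voice_score = nn.Linear(2 * hidden, 1)  # score of frame t being the voicing onset

    def forward(self, x):            # x: (1, T, n_feats) acoustic frames
        h, _ = self.rnn(x)           # (1, T, 2*hidden)
        return self.burst_score(h).squeeze(-1), self.voice_score(h).squeeze(-1)

    def decode(self, x, frame_ms=1.0):
        """Exhaustive structured decoding over (burst, voicing) pairs; signed VOT in ms."""
        burst, voice = self.forward(x)                                   # each (1, T)
        scores = burst.squeeze(0)[:, None] + voice.squeeze(0)[None, :]   # (T, T) pair scores
        t_burst, t_voice = divmod(int(scores.argmax()), scores.size(1))
        return (t_voice - t_burst) * frame_ms

if __name__ == "__main__":
    model = VOTSketch()
    frames = torch.randn(1, 200, 40)  # 200 frames of hypothetical 40-dim features
    print("predicted signed VOT (ms):", model.decode(frames))
```

The actual Dr.VOT system additionally relies on multi-task learning and adversarial training for robustness to annotator variation; the sketch omits both and shows only the structured scoring-and-decoding step.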
Pages: 629-633
Number of pages: 5