Automatic detection of expressed emotion from Five-Minute Speech Samples: Challenges and opportunities

Cited by: 2
Authors
Mirheidari, Bahman [1 ]
Bittar, Andre [2 ]
Cummins, Nicholas [2 ,3 ]
Downs, Johnny [3 ]
Fisher, Helen L. [4 ,5 ]
Christensen, Heidi [1 ]
Affiliations
[1] Univ Sheffield, Dept Comp Sci, Sheffield, England
[2] Kings Coll London, Inst Psychiat Psychol & Neurosci, Dept Biostat & Hlth Informat, London, England
[3] Kings Coll London, CAMHS Digital Lab, Dept Child & Adolescent Psychiat, Inst Psychiat Psychol & Neurosci, London, England
[4] Kings Coll London, Inst Psychiat Psychol & Neurosci, Social Genet & Dev Psychiat Ctr, London, England
[5] Kings Coll London, ESRC Ctr Soc & Mental Hlth, London, England
Funding
UK Medical Research Council; UK Economic and Social Research Council;
Keywords
FAMILY-LIFE; DATABASES; MODELS;
DOI
10.1371/journal.pone.0300518
Chinese Library Classification
O [Mathematical sciences and chemistry]; P [Astronomy and earth sciences]; Q [Biological sciences]; N [General natural sciences];
Discipline classification codes
07; 0710; 09;
Abstract
Research into clinical applications of speech-based emotion recognition (SER) technologies has been steadily increasing over the past few years. One such potential application is the automatic recognition of expressed emotion (EE) components within family environments. Identifying EE is highly important, as its components have been linked with a range of adverse life events. Manual coding of EE requires time-consuming specialist training, amplifying the need for automated approaches. Herein, we describe an automated machine learning approach for determining the degree of warmth, a key component of EE, from acoustic and text-based natural language features. Our dataset of 52 recorded interviews is drawn from recordings, collected over 20 years ago, of a nationally representative birth cohort of British twin children, and was manually coded for EE by two researchers (inter-rater reliability 0.84-0.90). We demonstrate that the degree of warmth can be predicted with an F1-score of 64.7% despite working with audio recordings of highly variable quality. These highly promising results suggest that machine learning may soon be able to assist in the coding of EE.
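The reported 64.7% F1-score averages per-class precision and recall over the warmth categories. As a reminder of how that metric is computed (this is not the authors' pipeline, and the labels below are purely illustrative), a minimal macro-averaged F1 implementation might look like:

```python
def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical warmth codes (0 = low, 1 = moderate, 2 = high); not study data.
y_true = [0, 0, 1, 1, 2, 2, 2, 1]
y_pred = [0, 1, 1, 1, 2, 2, 0, 1]
print(round(macro_f1(y_true, y_pred, [0, 1, 2]), 3))  # → 0.719
```

Macro averaging treats each warmth class equally regardless of how often it occurs, which matters when, as here, the class distribution across interviews may be imbalanced.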
Pages: 16