Fake Moods: Can Users Trick an Emotion-Aware VoiceBot?

Cited by: 5
Authors
Ma, Yong [1 ]
Drewes, Heiko [1 ]
Butz, Andreas [1 ]
Affiliations
[1] Ludwig Maximilians Univ Munchen, Munich, Germany
Source
EXTENDED ABSTRACTS OF THE 2021 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI'21) | 2021
Keywords
Speech Emotion Detection; Emotion-Aware VoiceBot; Data Acquisition for Training Neural Networks; SPEECH; RECOGNITION; FEATURES;
DOI
10.1145/3411763.3451744
Chinese Library Classification (CLC)
TP3 [Computing Technology; Computer Technology];
Discipline Classification Code
0812;
Abstract
The ability to deal properly with emotion could be a critical feature of future VoiceBots. Humans might even choose to use fake emotions, e.g., sound angry to emphasize what they are saying or sound nice to get what they want. However, it is unclear whether current emotion detection methods detect such acted emotions properly, or rather the true emotion of the speaker. We asked a small number of participants (26) to mimic five basic emotions and used an open source emotion-in-voice detector to provide feedback on whether their acted emotion was recognized as intended. We found that it was difficult for participants to mimic all five emotions and that certain emotions were easier to mimic than others. However, it remains unclear whether this is due to the fact that emotion was only acted or due to the insufficiency of the detection software. As an intended side effect, we collected a small corpus of labeled data for acted emotion in speech, which we plan to extend and eventually use as training data for our own emotion detection. We present the study setup and discuss some insights on our results.
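The analysis implied by the study setup — did the detector label an acted utterance with the emotion the speaker intended, and how does that success rate vary by emotion? — amounts to a per-emotion tally over (intended, detected) label pairs. A minimal sketch follows; the detector itself is external, and the function name and emotion label set are illustrative assumptions, not taken from the paper:

```python
from collections import Counter

# Five basic emotions asked of participants; the exact label set here is
# an assumption for illustration.
EMOTIONS = ["anger", "happiness", "sadness", "fear", "neutral"]

def mimicry_success_rates(trials):
    """For each intended emotion, return the fraction of acted trials in
    which the detector's label matched what the speaker tried to convey.

    trials: iterable of (intended, detected) label pairs.
    """
    attempts = Counter()
    hits = Counter()
    for intended, detected in trials:
        attempts[intended] += 1
        if detected == intended:
            hits[intended] += 1
    return {emotion: hits[emotion] / attempts[emotion] for emotion in attempts}

# Toy trial log: anger is mimicked more reliably than fear in this fake data.
trials = [
    ("anger", "anger"), ("anger", "anger"), ("anger", "neutral"),
    ("fear", "sadness"), ("fear", "fear"), ("fear", "neutral"),
]
rates = mimicry_success_rates(trials)
```

Comparing such rates across emotions is what would show that some emotions are easier to mimic convincingly than others, without distinguishing whether the gap stems from the acting or from the detector.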
Pages: 4
Related Papers
50 records in total
  • [21] What If Bots Feel Moods? Towards Controllable Retrieval-based Dialogue Systems with Emotion-Aware Transition Networks
    Qiu, Lisong
    Shiu, Yingwai
    Lin, Pingping
    Song, Ruihua
    Liu, Yue
    Zhao, Dongyan
    Yan, Rui
    PROCEEDINGS OF THE 43RD INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '20), 2020, : 1161 - 1170
  • [23] Personalized Emotion-Aware Video Streaming for the Elderly
    Dong, Yi
    Hu, Han
    Wen, Yonggang
    Yu, Han
    Miao, Chunyan
    SOCIAL COMPUTING AND SOCIAL MEDIA: TECHNOLOGIES AND ANALYTICS, SCSM 2018, PT II, 2018, 10914 : 372 - 382
  • [24] Emotion-Aware Speaker Identification With Transfer Learning
    Noh, Kyoungju
    Jeong, Hyuntae
    IEEE ACCESS, 2023, 11 : 77292 - 77306
  • [25] EmoMTB: Emotion-aware Music Tower Blocks
    Melchiorre, Alessandro B.
    Penz, David
    Ganhoer, Christian
    Lesota, Oleg
    Fragoso, Vasco
    Fritzl, Florian
    Parada-Cabaleiro, Emilia
    Schubert, Franz
    Schedl, Markus
    PROCEEDINGS OF THE 2022 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2022, 2022, : 206 - 210
  • [26] Emotion-Aware Music Driven Movie Montage
    Liu, Wu-Qin
    Lin, Min-Xuan
    Huang, Hai-Bin
    Ma, Chong-Yang
    Song, Yu
    Dong, Wei-Ming
    Xu, Chang-Sheng
    JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2023, 38 (03) : 540 - 553
  • [27] Physiological mouse: toward an emotion-aware mouse
    Fu, Yujun
    Leong, Hong Va
    Ngai, Grace
    Huang, Michael Xuelin
    Chan, Stephen C. F.
    UNIVERSAL ACCESS IN THE INFORMATION SOCIETY, 2017, 16 : 365 - 379
  • [28] Towards Emotion-Aware Agents For Negotiation Dialogues
    Chawla, Kushal
    Clever, Rene
    Ramirez, Jaysa
    Lucas, Gale
    Gratch, Jonathan
    2021 9TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION (ACII), 2021,
  • [29] Modeling Protagonist Emotions for Emotion-Aware Storytelling
    Brahman, Faeze
    Chaturvedi, Snigdha
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 5277 - 5294
  • [30] Emotion-Aware System for Upper Extremity Rehabilitation
    Mihelj, Matjaz
    Novak, Domen
    Munih, Marko
    2009 VIRTUAL REHABILITATION INTERNATIONAL CONFERENCE, 2009, : 160 - 165