I See What You're Hearing: Facilitating The Effect of Environment on Perceived Emotion While Teleconferencing

Cited by: 1
Authors
Marino D. [1]
Henry M. [1]
Fortin P.E. [2]
Bhayana R. [3]
Cooperstock J. [1]
Affiliations
[1] McGill University, Center for Intelligent Machines, Montreal, H3A 0G4, QC
[2] McGill University, Dept. of Electrical and Computer Engineering, Montreal, H3A 0G4, QC
[3] Indraprastha Institute of Information Technology, Dept. of Human Centered Design, New Delhi, Delhi
Keywords
context; multimodal; teleconferencing; visualization
DOI
10.1145/3579495
Abstract
Our perception of emotion is highly contextual. Changes in the environment can affect our narrative framing, and thus augment our emotional perception of interlocutors. User environments are typically heavily suppressed due to the technical limitations of commercial videoconferencing platforms. As a result, participants in a video call often lack contextual awareness, and this affects how they perceive the emotions of conversants. We present a videoconferencing module that visualizes the user's aural environment to enhance awareness between interlocutors. The system visualizes environmental sound based on its semantic and acoustic properties. We found that our visualization system was about 50% effective at eliciting emotional perceptions in users that were similar to the responses elicited by the environmental sound it replaced. The contributed system provides a unique approach to facilitating ambient awareness on an implicit emotional level in situations where multimodal environmental context is suppressed. © 2023 ACM.