Acoustically Aware Robots: Detecting and evaluating sounds robots make and hear

Cited by: 1
Authors
Goedicke, David [1 ]
Tennent, Hamish
Moore, Dylan [2 ]
Ju, Wendy [1 ]
Affiliations
[1] Cornell Tech, New York, NY 10044 USA
[2] Stanford Univ, Stanford, CA 94305 USA
Source
HRI '21: COMPANION OF THE 2021 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION | 2021
DOI
10.1145/3434074.3444876
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The sounds a robot or automated system makes, and the sounds it listens for in our shared acoustic environment, can greatly expand its contextual understanding and shape its behaviors to suit the interactions it is trying to perform. In social contexts, people convey significant information through sound in interpersonal communication. Para-linguistic information about where we are, how loudly we are speaking, or whether we sound happy, sad, or upset is relevant to a robot that seeks to adapt its interactions to be socially appropriate. Similarly, the qualities of the sound an object makes can change how people perceive that object: they can alter whether or not it attracts attention, interrupts other interactions, or reinforces or contradicts an emotional expression, and as such should be aligned with the designer's intention for the object. In this tutorial, we introduce participants to software and design methods that help robots recognize and generate sound for human-robot interaction (HRI). Using open-source tools and methods that designers can apply to their own robots, we seek to increase the application of sound to robot design and to stimulate HRI research in robot sound.
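As a concrete illustration of the kind of open-source tooling the tutorial refers to, the minimal Python sketch below covers both directions the abstract names: estimating how loudly someone is speaking (a simple para-linguistic cue a listening robot could use) and synthesizing a short notification tone (a sound the robot makes). It assumes the librosa, numpy, and soundfile packages; the input and output file names are hypothetical placeholders, not artifacts of the tutorial itself.

# A minimal sketch, assuming librosa, numpy, and soundfile are installed.
# "robot_mic.wav" and "robot_chirp.wav" are hypothetical placeholder files.
import numpy as np
import librosa
import soundfile as sf

# --- Listening: estimate speaker loudness, a simple para-linguistic cue ---
y, sr = librosa.load("robot_mic.wav", sr=None)   # load recording at native rate
rms = librosa.feature.rms(y=y)[0]                # frame-wise RMS energy
loudness_db = librosa.amplitude_to_db(rms, ref=np.max)
print(f"median frame loudness: {np.median(loudness_db):.1f} dB rel. peak")

# --- Sounding: synthesize a short rising chirp as a robot notification tone ---
dur, f0, f1 = 0.4, 440.0, 880.0                  # duration (s), start/end pitch (Hz)
t = np.linspace(0, dur, int(sr * dur), endpoint=False)
freq = np.linspace(f0, f1, t.size)               # linear upward frequency sweep
tone = 0.3 * np.sin(2 * np.pi * np.cumsum(freq) / sr)  # integrate freq to get phase
sf.write("robot_chirp.wav", tone, sr)            # write a playable output file

Frame-wise RMS in decibels is only a crude loudness proxy, but it is the sort of lightweight, open-source measurement a designer can run on their own robot's microphone input without any trained model.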
Pages: 697-699
Page count: 3