SonifyAR: Context-Aware Sound Generation in Augmented Reality

Cited by: 0
Authors
Su, Xia [1 ,2 ]
Froehlich, Jon E. [1 ]
Koh, Eunyee [2 ]
Xiao, Chang [2 ]
Affiliations
[1] Univ Washington, Seattle, WA 98195 USA
[2] Adobe Res, San Jose, CA USA
Source
PROCEEDINGS OF THE 37TH ANNUAL ACM SYMPOSIUM ON USER INTERFACE SOFTWARE AND TECHNOLOGY, UIST 2024 | 2024
Keywords
Mixed Reality; Sound; Augmented Reality; Authoring Tool; Sonification
DOI
10.1145/3654777.3676406
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Sound plays a crucial role in enhancing user experience and immersion in Augmented Reality (AR). However, current platforms lack support for AR sound authoring due to limited interaction types, challenges in collecting and specifying context information, and difficulty in acquiring matching sound assets. We present SonifyAR, an LLM-based AR sound authoring system that generates context-aware sound effects for AR experiences. SonifyAR expands the current design space of AR sound and implements a Programming by Demonstration (PbD) pipeline to automatically collect contextual information about AR events, including virtual content semantics and real-world context. This context information is then processed by a large language model to acquire sound effects via Recommendation, Retrieval, Generation, and Transfer methods. To evaluate the usability and performance of our system, we conducted a user study with eight participants and created five example applications, including an AR-based science experiment and an assistive application for low-vision AR users.
Pages: 13
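
The record contains no implementation details beyond the abstract, but the pipeline it describes (collect the context of an AR event, then hand that context to an LLM to source a sound effect) can be illustrated with a minimal, hypothetical Swift/SceneKit sketch. Everything project-specific here is an assumption for illustration: the class name CollisionSonifier, the onContextReady hook, and the reliance on authoring-time node names are not from the paper; only SceneKit's physics-contact delegate API is real.

```swift
import Foundation
import SceneKit

// Hypothetical sketch of SonifyAR-style context collection (names are
// illustrative, not the paper's actual code). When a virtual object
// collides with a detected real-world surface, the event's context is
// packaged into a natural-language description that an LLM could consume
// to recommend, retrieve, generate, or transfer a matching sound effect.
final class CollisionSonifier: NSObject, SCNPhysicsContactDelegate {

    // Hypothetical hook: forwards the assembled context string to an LLM.
    var onContextReady: ((String) -> Void)?

    func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
        // Virtual-content semantics: node names assigned at authoring time,
        // e.g. "woodenCrate". Real-world context: the name given to the
        // detected plane's node, e.g. "floor" or "table".
        let virtualObject = contact.nodeA.name ?? "unknown virtual object"
        let surface = contact.nodeB.name ?? "unknown surface"
        let impulse = contact.collisionImpulse  // rough proxy for impact strength

        let context = """
        AR event: collision between virtual content and a detected surface.
        Virtual object: \(virtualObject).
        Surface: \(surface).
        Impact impulse: \(String(format: "%.2f", impulse)).
        Task: propose a matching sound effect via recommendation, retrieval, \
        generation, or transfer.
        """
        onContextReady?(context)
    }
}
```

To wire this up, one would assign an instance as the scene's contact delegate (e.g. `sceneView.scene.physicsWorld.contactDelegate = sonifier` on an ARSCNView) and forward the `onContextReady` strings to whatever LLM endpoint handles sound acquisition; how SonifyAR actually structures its prompts and sound-acquisition backends is described in the paper itself, not in this record.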