Challenges and design considerations for multimodal asynchronous collaboration in VR

Cited by: 22
Authors
Chow K. [1 ]
Coyiuto C. [1 ]
Nguyen C. [2 ]
Yoon D. [1 ]
Affiliations
[1] University of British Columbia, Vancouver, BC
[2] Adobe Research, San Francisco, CA 94103
Funding
Natural Sciences and Engineering Research Council of Canada
Keywords
3D; Asynchronous collaboration; Body language; Gesture; Immersion; Multimodal recording; Pointing; Presence; Proxemics; Spatial task; Speech; Virtual reality;
DOI
10.1145/3359142
Abstract
Studies on collaborative virtual environments (CVEs) have suggested capture and later replay of multimodal interactions (e.g., speech, body language, and scene manipulations), which we refer to as multimodal recordings, as an effective medium for time-distributed collaborators to discuss and review 3D content in an immersive, expressive, and asynchronous way. However, there are gaps in empirical knowledge about how this multimodal asynchronous VR collaboration (MAVRC) context affects social behaviors in mediated communication, workspace awareness in cooperative work, and user requirements for authoring and consuming multimodal recordings. This study aims to address these gaps by conceptualizing MAVRC as a type of CSCW and by understanding the challenges and design considerations of MAVRC systems. To this end, we conducted an exploratory need-finding study in which participants (N = 15) used an experimental MAVRC system to complete a representative spatial task in an asynchronous collaborative setting, involving both consumption and production of multimodal recordings. Qualitative analysis of interview and observation data from the study revealed unique, core design challenges of MAVRC in: (1) coordinating proxemic behaviors between asynchronous collaborators, (2) providing traceability and change awareness across different versions of 3D scenes, (3) accommodating viewpoint control to maintain workspace awareness, and (4) supporting navigation and editing of multimodal recordings. We discuss design implications, ideate on potential design solutions, and conclude the paper with a set of design recommendations for MAVRC systems. © 2019 Copyright held by the owner/author(s).