A multi-modal dance corpus for research into interaction between humans in virtual environments

Cited by: 0
Authors
Slim Essid
Xinyu Lin
Marc Gowing
Georgios Kordelas
Anil Aksay
Philip Kelly
Thomas Fillon
Qianni Zhang
Alfred Dielmann
Vlado Kitanovski
Robin Tournemenne
Aymeric Masurelle
Ebroul Izquierdo
Noel E. O’Connor
Petros Daras
Gaël Richard
Affiliations
[1] Institut Telecom/Telecom ParisTech, CNRS-LTCI
[2] Multimedia and Vision Group (MMV), Queen Mary University
[3] CLARITY: Centre for Sensor Web Technologies, Dublin City University
[4] Informatics and Telematics Institute, Centre for Research and Technology Hellas
Source
Journal on Multimodal User Interfaces | 2013 / Vol. 7
Keywords
Dance; Multimodal data; Multiview video processing; Audio; Depth maps; Motion; Inertial sensors; Synchronisation; Activity recognition; Virtual reality; Computer vision; Machine listening;
DOI: not available
Abstract
We present a new, freely available multimodal corpus for research into, amongst other areas, real-time realistic interaction between humans in online virtual environments. The corpus targets an online dance class application in which students, with avatars driven by whatever 3D capture technology is locally available to them, can learn choreographies with teacher guidance in an online virtual dance studio. Accordingly, the corpus consists of student/teacher dance choreographies captured concurrently at two different sites using a variety of media modalities, including synchronised audio rigs, multiple cameras, wearable inertial measurement devices and depth sensors. Each of the several dancers performs a number of fixed choreographies, which are graded according to specific evaluation criteria, and ground-truth dance choreography annotations are provided. Furthermore, for unsynchronised sensor modalities, the corpus includes distinctive events to support data stream synchronisation. The total duration of the recorded content is 1 h and 40 min per sensor, amounting to 55 h of recordings across all sensors. Although the corpus is tailored specifically to the online dance class scenario, the data is free to download and use for any research and development purpose.
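As a brief illustration of the synchronisation idea mentioned above, the following minimal Python sketch aligns two unsynchronised sensor streams using the timestamps of a shared distinctive event (such as a clap) detected independently in each stream. All function and variable names here are hypothetical, for illustration only; they are not part of the corpus or its tooling.

    # Sketch: estimate a constant clock offset between two streams from
    # paired timestamps of the same distinctive events, then shift one
    # stream onto the other's clock. Names are illustrative assumptions.

    def estimate_offset(events_a, events_b):
        """Estimate the constant offset (seconds) of stream B's clock
        relative to stream A's, from paired event timestamps."""
        if len(events_a) != len(events_b) or not events_a:
            raise ValueError("need the same non-zero number of paired events")
        # Average per-event differences to reduce detection jitter.
        diffs = [b - a for a, b in zip(events_a, events_b)]
        return sum(diffs) / len(diffs)

    def align_timestamps(timestamps_b, offset):
        """Map stream B timestamps onto stream A's clock."""
        return [t - offset for t in timestamps_b]

    if __name__ == "__main__":
        clap_times_audio = [2.50, 62.48, 122.51]   # seconds, audio clock
        clap_times_depth = [3.10, 63.07, 123.12]   # seconds, depth-sensor clock
        offset = estimate_offset(clap_times_audio, clap_times_depth)
        print(f"estimated offset: {offset:.3f} s")  # roughly 0.60 s
        print(align_timestamps([10.0, 20.0], offset))

Averaging over several events, rather than using a single one, makes the estimate more robust to detection jitter in any one modality; this is one simple way the corpus's distinctive events could be exploited, not a description of the authors' own pipeline.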
Pages: 157-170
Number of pages: 13