Automated gesture segmentation from dance sequences

Cited by: 58
Authors
Kahol, K [1 ]
Tripathi, P [1 ]
Panchanathan, S [1 ]
Affiliation
[1] Arizona State Univ, Ctr Cognit Ubiquitous Comp, Dept Comp Sci & Engn, CUbiC, Tempe, AZ 85284 USA
Source
SIXTH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION, PROCEEDINGS | 2004
DOI
10.1109/AFGR.2004.1301645
Chinese Library Classification
TP [automation and computer technology]
Discipline code
0812
Abstract
Complex human motion (e.g. dance) sequences are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective, and varies considerably from one choreographer to another. Dance sequences also exhibit a large vocabulary of gestures. In this paper, we propose an algorithm called Hierarchical Activity Segmentation. This algorithm employs a dynamic hierarchical layered structure to represent human anatomy, and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a naive Bayesian classifier to derive choreographer profiles from empirical data that are used to predict how particular choreographers will segment gestures in other motion sequences. When the predictions were tested with a library of 45 3D motion capture sequences (with 185 distinct gestures) created by 5 different choreographers, they were found to be 93.3% accurate.
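The abstract's core idea — low-level motion parameters per body segment fed to a naive Bayesian classifier that flags gesture boundaries — can be sketched as follows. This is a minimal illustration only, not the authors' Hierarchical Activity Segmentation implementation: the two features (speed, acceleration), the single body segment, and the toy training data are all assumptions for the sake of a runnable example.

```python
import math

# Toy per-frame features for one body segment: (speed, acceleration).
# In the paper's scheme such low-level parameters are computed for every
# node of a dynamic body hierarchy; one segment suffices to show the idea.
# Label 1 = gesture-boundary frame, label 0 = within-gesture frame.
train = [
    ((0.9, 0.8), 1), ((1.1, 1.0), 1), ((1.0, 0.9), 1),
    ((0.1, 0.1), 0), ((0.2, 0.0), 0), ((0.15, 0.05), 0),
]

def fit_gaussian_nb(data):
    """Per-class feature means/variances plus class priors."""
    stats = {}
    for label in {y for _, y in data}:
        rows = [x for x, y in data if y == label]
        n = len(rows)
        cols = list(zip(*rows))
        means = [sum(c) / n for c in cols]
        vars_ = [max(sum((v - m) ** 2 for v in c) / n, 1e-6)
                 for c, m in zip(cols, means)]
        stats[label] = (means, vars_, n / len(data))
    return stats

def predict(stats, x):
    """Most probable class under the Gaussian naive Bayes model."""
    def log_posterior(label):
        means, vars_, prior = stats[label]
        ll = math.log(prior)
        for v, m, var in zip(x, means, vars_):
            ll += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        return ll
    return max(stats, key=log_posterior)

model = fit_gaussian_nb(train)
print(predict(model, (1.05, 0.95)))  # fast, jerky motion -> 1 (boundary)
print(predict(model, (0.12, 0.02)))  # slow, smooth motion -> 0 (within gesture)
```

A per-choreographer profile, in this sketch, would simply be one such fitted model per choreographer, trained on that choreographer's manually segmented sequences.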
Pages: 883–888
Page count: 6