An Efficient Immersive Self-Training System for Hip-Hop Dance Performance with Automatic Evaluation Features

Times Cited: 2
Authors
Esaki, Kazuhiro [1]
Nagao, Katashi [1]
Affiliations
[1] Nagoya Univ, Grad Sch Informat, Nagoya 4648603, Japan
Source
APPLIED SCIENCES-BASEL | 2024, Vol. 14, No. 14
Keywords
virtual reality; dance training; automatic evaluation; deep learning; contrastive learning; motion capture; emotions
DOI
10.3390/app14145981
CLC Number (Chinese Library Classification)
O6 [Chemistry];
Discipline Code
0703;
Abstract
Featured Application: Virtual Reality Simulation and Training for Dance Performance Improvement.
As a significant form of physical expression, dance demands ongoing training for skill enhancement, particularly in expressiveness. However, such training is often constrained by location and time, and the evaluation of dance performance tends to be subjective, which calls for effective training methods and objective evaluation techniques. In this research, we introduce a self-training system for dance that employs VR technology to create an immersive training environment and thereby supports a comprehensive understanding of three-dimensional dance movements. The system incorporates markerless motion capture technology to record dancers' movements accurately in real time and map them onto a VR avatar. In addition, deep learning enables multi-perspective assessment of dance performance, providing feedback that supports users' repetitive practice. To enable deep learning-based dance evaluation, we built a dataset of beginner-level dances together with expert evaluations of those dances. The dataset was recorded with practitioners in a dance studio using a total of four cameras, and expert annotations were collected from multiple perspectives to provide a comprehensive evaluation. This study also proposes three automatic evaluation models. A comparative analysis of the models, in particular contrastive learning (and autoencoder)-based representation learning and a reference-guided model (in which a model dancer's performance serves as the reference), showed that the reference-guided model achieved superior accuracy. The proposed method predicted dance performance ratings to within approximately ±1 point of professional coaches' ratings on a 10-point scale. These findings open up new possibilities for future dance training and evaluation systems.
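
The reference-guided model scores a learner's dance against a model dancer's performance of the same choreography. The sketch below is a minimal, hypothetical illustration of such a pairwise scoring network in PyTorch, not the authors' implementation: the GRU encoder, the 33-joint pose input (99 values per frame), the layer sizes, and the name ReferenceGuidedScorer are assumptions made for this example.

import torch
import torch.nn as nn

class ReferenceGuidedScorer(nn.Module):
    """Toy pairwise scorer: encode learner and reference pose sequences with a
    shared GRU, then regress a rating on the 1-10 scale from the pair."""

    def __init__(self, joint_dim: int = 99, hidden_dim: int = 128):
        super().__init__()
        # Shared sequence encoder applied to both the learner and the model dancer
        self.encoder = nn.GRU(joint_dim, hidden_dim, batch_first=True)
        # Regression head over both embeddings and their element-wise difference
        self.head = nn.Sequential(
            nn.Linear(hidden_dim * 3, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, learner, reference):
        # learner, reference: (batch, frames, joint_dim) pose sequences
        _, h_learner = self.encoder(learner)
        _, h_reference = self.encoder(reference)
        h_learner, h_reference = h_learner[-1], h_reference[-1]
        pair = torch.cat([h_learner, h_reference, h_learner - h_reference], dim=-1)
        # Map the regression output onto the 1-10 rating range
        return 1.0 + 9.0 * torch.sigmoid(self.head(pair)).squeeze(-1)

if __name__ == "__main__":
    model = ReferenceGuidedScorer()
    learner = torch.randn(2, 120, 99)     # 2 clips, 120 frames, 33 joints x 3 coords (assumed)
    reference = torch.randn(2, 120, 99)   # the model dancer's clips, time-aligned (assumed)
    print(model(learner, reference))      # predicted ratings on a 1-10 scale

Concatenating the two embeddings with their difference lets the regression head weigh both the absolute quality of the learner's motion and its deviation from the reference, which is the intuition behind reference-guided evaluation.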
Pages: 30
Cited References: 62