Implementation of Computer Aided Dance Teaching Integrating Human Model Reconstruction Technology

Cited by: 0
Authors
Zhao Y. [1 ]
Yang H. [2 ]
Affiliations
[1] Department of Dance, Zhengzhou Preschool Education College, Zhengzhou
[2] Academy of Fine Arts, Henan University, Kaifeng, Henan
Source
Computer-Aided Design and Applications | 2024 / Vol. 21 / Issue S10
Keywords
Computer-Aided Instruction; Dance Movement Recognition; Mixed Feature; Reconstruction of Human Model;
DOI
10.14733/cadaps.2024.S10.196-210
Abstract
In dance teaching and training, advanced technology can provide a technical foundation and give dance art digital characteristics. A dance computer-aided instruction (CAI) system meets the requirements of digital dance and offers dance educators and learners a convenient, full-perspective visual mode, so introducing such a system in a timely manner supports better artistic performance. This article proposes a human motion model reconstruction technique based on mixed features and builds a dance motion recognition model on the principle of convolution operations, allowing the body to express itself and communicate with the computer through a variety of body language. Because the modalities combined in early fusion usually have different characteristics, the extracted feature vectors are inconsistent in their spatial and temporal dimensions, and the model must reconcile them before recognition. The proposed method achieves high accuracy and low error and can help optimize the effect of dance teaching. © 2024 U-turn Press LLC.
Pages: 196-210
Page count: 14
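
The abstract gives no implementation details. The following is a minimal, illustrative sketch of the two ideas it names: early fusion of modal features whose temporal dimensions disagree, followed by a convolutional recognition head. Everything here (the EarlyFusionDanceNet name, the skeleton/appearance modalities, layer sizes, and input shapes) is an assumption for illustration, not the authors' method.

```python
# Minimal sketch (not the paper's code): two feature streams of different
# channel width and temporal length are resampled to a common length,
# concatenated (early fusion), and classified by a small 1D conv network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyFusionDanceNet(nn.Module):
    def __init__(self, skel_dim=75, vis_dim=128, common_len=64, n_classes=10):
        super().__init__()
        self.common_len = common_len
        # Per-modality 1x1 convolutions bring both streams to the same width
        self.skel_proj = nn.Conv1d(skel_dim, 64, kernel_size=1)
        self.vis_proj = nn.Conv1d(vis_dim, 64, kernel_size=1)
        # Convolutional recognition head over the fused sequence
        self.backbone = nn.Sequential(
            nn.Conv1d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, skel_seq, vis_seq):
        # skel_seq: (batch, skel_dim, T1); vis_seq: (batch, vis_dim, T2)
        # Resample both streams to one temporal length, resolving the
        # mismatch between modalities before early fusion.
        skel = F.interpolate(skel_seq, size=self.common_len,
                             mode="linear", align_corners=False)
        vis = F.interpolate(vis_seq, size=self.common_len,
                            mode="linear", align_corners=False)
        fused = torch.cat([self.skel_proj(skel), self.vis_proj(vis)], dim=1)
        feat = self.backbone(fused).squeeze(-1)
        return self.classifier(feat)

if __name__ == "__main__":
    # Dummy tensors with illustrative shapes only
    model = EarlyFusionDanceNet()
    skel = torch.randn(2, 75, 90)   # e.g. 25 joints x 3 coords over 90 frames
    vis = torch.randn(2, 128, 45)   # appearance features over 45 frames
    print(model(skel, vis).shape)   # torch.Size([2, 10])
```

The design choice to resample each stream before concatenation is one common way to handle the spatio-temporal inconsistency the abstract mentions; the paper itself may use a different alignment strategy.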