Implementation of a Virtual Training Simulator Based on 360° Multi-View Human Action Recognition

Cited by: 39
Authors
Kwon, Beom [1 ]
Kim, Junghwan [1 ]
Lee, Kyoungoh [1 ]
Lee, Yang Koo [2 ]
Park, Sangjoon [2 ]
Lee, Sanghoon [1 ]
Affiliations
[1] Yonsei Univ, Dept Elect & Elect Engn, Seoul 120749, South Korea
[2] Elect & Telecommun Res Inst, Daejeon 305700, South Korea
Source
IEEE ACCESS | 2017, Vol. 5
Keywords
Human action recognition; Kinect sensor; virtual training simulator; PHYSICAL-ACTIVITY RECOGNITION; SENSOR; SYSTEM;
DOI
10.1109/ACCESS.2017.2723039
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Code
0812 ;
Abstract
Virtual training has received a considerable amount of research attention in recent years due to its potential for use in a variety of applications, such as virtual military training, virtual emergency evacuation, and virtual firefighting. To provide a trainee with an interactive training environment, human action recognition methods have been introduced as a major component of virtual training simulators. Wearable motion capture suit-based human action recognition has been widely used for virtual training, although it may distract the trainee. In this paper, we present a virtual training simulator based on 360 degrees multi-view human action recognition using multiple Kinect sensors that provides an immersive environment for the trainee without the need to wear devices. To this end, the proposed simulator contains coordinate system transformation, front-view Kinect sensor tracking, multi-skeleton fusion, skeleton normalization, orientation compensation, feature extraction, and classifier modules. Virtual military training is presented as a potential application of the proposed simulator. To train and test it, a database consisting of 25 military training actions was constructed. In the test, the proposed simulator provided an excellent, natural training environment in terms of frame-by-frame classification accuracy, action-by-action classification accuracy, and observational latency.
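The abstract's coordinate-system-transformation and multi-skeleton-fusion modules can be sketched roughly as follows. The paper does not specify its fusion rule in this record, so the confidence-weighted averaging, the function names, and the calibration parameters below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def to_world(joints, R, t):
    """Transform an (N, 3) array of joint positions from one Kinect
    sensor's local frame into a shared world frame, given the sensor's
    calibration (rotation matrix R and translation vector t)."""
    return joints @ R.T + t

def fuse_skeletons(skeletons, confidences):
    """Fuse per-sensor skeletons (already in the world frame) into one
    skeleton by a per-joint, confidence-weighted average.
    skeletons:   list of (N, 3) arrays, one per sensor
    confidences: list of (N,) arrays of per-joint tracking confidence"""
    S = np.stack(skeletons)               # (K, N, 3)
    W = np.stack(confidences)             # (K, N)
    W = W / W.sum(axis=0, keepdims=True)  # normalize weights per joint
    return (S * W[..., None]).sum(axis=0)

# Hypothetical two-sensor example with identity calibration and
# 25 joints (the Kinect v2 skeleton size):
a = to_world(np.zeros((25, 3)), np.eye(3), np.zeros(3))
b = to_world(np.ones((25, 3)), np.eye(3), np.zeros(3))
fused = fuse_skeletons([a, b], [np.ones(25), np.ones(25)])
# with equal confidences, each fused joint is the midpoint of the two estimates
```

With equal confidences this reduces to a plain per-joint mean; in practice the weights would come from each sensor's joint-tracking state, down-weighting occluded joints.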
Pages: 12496-12511
Page count: 16