Human Action Recognition Using Key Points Displacement

Times cited: 0
Authors
Lai, Kuan-Ting [1,2]
Hsieh, Chaur-Heh [3]
Lai, Mao-Fu [4]
Chen, Ming-Syan [1,2]
Affiliations
[1] Acad Sinica, Res Ctr Informat Technol Innovat, Taipei, Taiwan
[2] Natl Taiwan Univ, Taipei, Taiwan
[3] Ming-Chuan Univ, Taoyuan, Taiwan
[4] Tungnan Univ, Taipei, Taiwan
Source
IMAGE AND SIGNAL PROCESSING, PROCEEDINGS | 2010 / Vol. 6134
Keywords
SIFT; Action Recognition; Optical Flow; Space-time-interest-points; SVM
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Recognizing human actions is currently one of the most active research topics. Efros et al. first proposed using optical flow and normalized correlation to recognize actions at a distance. One weakness of their method is that optical flow is too noisy to reveal the true motions. Another popular method is the space-time interest points proposed by Laptev et al., who extended the Harris corner detector to the temporal domain. Inspired by these two methods, we propose a new algorithm that detects motion from the displacement of Lowe's scale-invariant key points. The displacement vectors of matched key points are accumulated into weighted orientation histograms, which are then classified by an SVM. Experimental results demonstrate that the proposed motion descriptor is effective in recognizing both general and sport actions.
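The core descriptor described in the abstract can be sketched as follows, assuming matched key-point coordinates from consecutive frames are already available (e.g. from a SIFT matcher). This is a minimal illustration of a magnitude-weighted orientation histogram of key-point displacements, not the authors' exact implementation; the function name and bin count are illustrative choices.

```python
import numpy as np

def displacement_histogram(pts_prev, pts_next, n_bins=8):
    """Magnitude-weighted orientation histogram of key-point displacements.

    pts_prev, pts_next: (N, 2) arrays of matched key-point (x, y)
    coordinates in two consecutive frames. Each displacement vector
    votes into an orientation bin, weighted by its magnitude; the
    histogram is L1-normalized so clips of different lengths compare.
    The resulting fixed-length vector could then be fed to an SVM.
    """
    d = np.asarray(pts_next, dtype=float) - np.asarray(pts_prev, dtype=float)
    mag = np.hypot(d[:, 0], d[:, 1])                     # displacement magnitudes
    ang = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)  # orientations in [0, 2*pi)
    hist, _ = np.histogram(ang, bins=n_bins,
                           range=(0.0, 2 * np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

# Example: all key points moving uniformly to the right, so the entire
# weight falls into the first orientation bin.
prev_pts = np.array([[0.0, 0.0], [10.0, 5.0], [3.0, 7.0]])
next_pts = prev_pts + np.array([2.0, 0.0])
h = displacement_histogram(prev_pts, next_pts)
```

In practice, per-frame-pair histograms like `h` would be aggregated over a clip and concatenated into the feature vector classified by the SVM.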
Pages: 439+
Number of pages: 3
Related papers
50 records total
  • [31] Human Action Recognition Using Global Point Feature Histograms and Action Shapes
    Rusu, Radu Bogdan
    Bandouch, Jan
    Meier, Franziska
    Essa, Irfan
    Beetz, Michael
    ADVANCED ROBOTICS, 2009, 23 (14) : 1873 - 1908
  • [32] Human action recognition using a fast learning fully complex-valued classifier
    Babu, R. Venkatesh
    Suresh, S.
    Savitha, R.
    NEUROCOMPUTING, 2012, 89 : 202 - 212
  • [33] TRAJECTORY FEATURE FUSION FOR HUMAN ACTION RECOGNITION
    Megrhi, Sameh
    Beghdadi, Azeddine
    Souidene, Wided
    2014 5TH EUROPEAN WORKSHOP ON VISUAL INFORMATION PROCESSING (EUVIP 2014), 2014,
  • [34] TUHAD: Taekwondo Unit Technique Human Action Dataset with Key Frame-Based CNN Action Recognition
    Lee, Jinkue
    Jung, Hoeryong
    SENSORS, 2020, 20 (17) : 1 - 20
  • [35] A method for action recognition based on pose and interest points
    Lu, Lu
    Zhan, Yi-Ju
    Jiang, Qing
    Cai, Qing-ling
    MULTIMEDIA TOOLS AND APPLICATIONS, 2015, 74 (15) : 6091 - 6109
  • [36] Human Action Recognition using Late Fusion and Dimensionality Reduction
    Xu, Haiyan
    Tian, Qian
    Wang, Zhen
    Wu, Jianhui
    2014 19TH INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING (DSP), 2014, : 63 - 67
  • [37] Robust Human Action Recognition Using Dynamic Movement Features
    Zhang, Huiwen
    Fu, Mingliang
    Luo, Haitao
    Zhou, Weijia
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2017, PT I, 2017, 10462 : 474 - 484
  • [38] Human Action Recognition Using Spatial and Temporal Sequences Alignment
    Li, Yandi
    Zhao, Zhihao
    SECOND INTERNATIONAL CONFERENCE ON OPTICS AND IMAGE PROCESSING (ICOIP 2022), 2022, 12328
  • [39] Human Action Recognition using Transfer Learning with Deep Representations
    Sargano, Allah Bux
    Wang, Xiaofeng
    Angelov, Plamen
    Habib, Zulfiqar
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 463 - 469
  • [40] HUMAN ACTION RECOGNITION USING ASSOCIATED DEPTH AND SKELETON INFORMATION
    Tang, Nick C.
    Lin, Yen-Yu
    Hua, Ju-Hsuan
    Weng, Ming-Fang
    Liao, Hong-Yuan Mark
    2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014,