Motion Clustering-based Action Recognition Technique Using Optical Flow

Cited by: 0
Authors
Mahbub, Upal [1 ]
Imtiaz, Hafiz [1 ]
Ahad, Md. Atiqur Rahman [2 ]
Affiliations
[1] Bangladesh Univ Engn & Technol, Dhaka 1000, Bangladesh
[2] Univ Dhaka, Dhaka, Bangladesh
Source
2012 INTERNATIONAL CONFERENCE ON INFORMATICS, ELECTRONICS & VISION (ICIEV) | 2012
Keywords
Motion-based Representation; Action Recognition; Optical Flow; RANSAC; SVM;
DOI
Not available
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
A new technique for action clustering-based human action representation, built on optical flow analysis and the random sample consensus (RANSAC) method, is proposed in this paper. The apparent motion of the human subject with respect to the background is detected using optical flow analysis, while the RANSAC algorithm is used to filter out unwanted interest points. From the remaining key interest points, the human subject is localized and the rectangular area surrounding the human body is segmented both horizontally and vertically. Next, the percentage change of interest points in each small block at the intersections of the horizontal and vertical segments is accumulated from frame to frame in matrix form for different persons performing the same action. The average of these matrices is used as the feature vector for that particular action. In addition, the changes in the position of the person along the X-axis and Y-axis are accumulated over an action and included in the feature vector. For recognition using the extracted feature vectors, a distance-based similarity measure and a support vector machine (SVM)-based classifier have been exploited. Extensive experiments on benchmark motion databases show that the proposed method offers not only a very high degree of accuracy but also considerable computational savings.
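The abstract describes a multi-step pipeline, so a minimal sketch may help fix the ideas. The sketch below is not the authors' implementation: it assumes sparse Lucas-Kanade optical flow on Shi-Tomasi corners, a RANSAC-fitted homography whose outliers are treated as points on the moving subject, and a 4 x 4 block grid; the GRID constant, the reprojection threshold, the per-block percentage-change formula, and the helper names subject_points, action_feature, and classify are all illustrative assumptions rather than details taken from the paper.

# Hedged sketch of the pipeline outlined in the abstract (OpenCV + NumPy).
import cv2
import numpy as np

GRID = 4  # assumed number of horizontal and vertical segments of the subject's bounding box


def subject_points(prev_gray, gray, ransac_thresh=3.0):
    """Optical flow between two frames, then RANSAC to discard points that follow the
    dominant (assumed background) motion; the outliers are kept as subject points."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400, qualityLevel=0.01, minDistance=5)
    if p0 is None or len(p0) < 8:
        return None
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    good0, good1 = p0[status.ravel() == 1], p1[status.ravel() == 1]
    if len(good0) < 8:
        return None
    _, mask = cv2.findHomography(good0, good1, cv2.RANSAC, ransac_thresh)
    if mask is None:
        return None
    subject = good1[mask.ravel() == 0].reshape(-1, 2)
    return subject if len(subject) > 0 else None


def action_feature(video_path, grid=GRID):
    """Build the per-block change matrix and the cumulative X/Y displacement
    for one video of one person performing an action."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("cannot read " + video_path)
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    block_change = np.zeros((grid, grid))  # accumulated per-block % change of interest points
    dx_total = dy_total = 0.0              # cumulative subject displacement along X and Y
    prev_count = prev_centroid = None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        subject = subject_points(prev_gray, gray)
        prev_gray = gray
        if subject is None:
            continue

        # Localize the subject: bounding box of the surviving interest points,
        # segmented horizontally and vertically into a grid x grid block layout.
        x0, y0 = subject.min(axis=0)
        x1, y1 = subject.max(axis=0)
        w, h = max(x1 - x0, 1.0), max(y1 - y0, 1.0)
        col = np.clip(((subject[:, 0] - x0) / w * grid).astype(int), 0, grid - 1)
        row = np.clip(((subject[:, 1] - y0) / h * grid).astype(int), 0, grid - 1)
        count = np.zeros((grid, grid))
        np.add.at(count, (row, col), 1)

        # Accumulate the frame-to-frame percentage change of interest points per block.
        if prev_count is not None:
            block_change += np.abs(count - prev_count) / (prev_count + 1.0) * 100.0
        prev_count = count

        # Accumulate the change in the subject's position along the X and Y axes.
        centroid = subject.mean(axis=0)
        if prev_centroid is not None:
            dx_total += centroid[0] - prev_centroid[0]
            dy_total += centroid[1] - prev_centroid[1]
        prev_centroid = centroid

    cap.release()
    return np.concatenate([block_change.ravel(), [dx_total, dy_total]])


def classify(test_vec, templates):
    """Distance-based similarity measure: pick the action whose averaged training
    template (mean feature vector over training subjects) is nearest."""
    return min(templates, key=lambda action: np.linalg.norm(test_vec - templates[action]))

To mirror the training stage described above, one would average action_feature vectors over all training subjects of each action to form the templates passed to classify, or alternatively feed the per-video vectors to an SVM classifier such as sklearn.svm.SVC; both options are suggested by the abstract, but the specific distance metric and SVM configuration are not given there.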
Pages: 919-924
Number of pages: 6