RETRACTED: A Fused Heterogeneous Deep Neural Network and Robust Feature Selection Framework for Human Actions Recognition (Retracted Article)

Cited by: 22
Authors
Khan, Muhammad Attique [1 ]
Zhang, Yu-Dong [2 ]
Alhusseni, Majed [3 ]
Kadry, Seifedine [4 ]
Wang, Shui-Hua [5 ]
Saba, Tanzila [6 ]
Iqbal, Tassawar [7 ]
Affiliations
[1] HITEC Univ, Dept Comp Sci, Museum Rd, Taxila, Pakistan
[2] Univ Leicester, Dept Informat, Leicester LE1 7RH, Leics, England
[3] Univ Hail, Coll Comp Sci & Engn, Hail, Saudi Arabia
[4] Noroff Univ Coll, Dept Appl Data Sci, Noroff Oslo, Norway
[5] Univ Leicester, Dept Math, Leicester LE1 7RH, Leics, England
[6] Prince Sultan Univ, Coll Comp & Informat Sci, Riyadh, Saudi Arabia
[7] COMSATS Univ Islamabad, Dept Comp Sci, Wah Campus, Islamabad, Pakistan
Keywords
Action recognition; Silhouette extraction; Shape features; Deep features; Feature selection; Feature fusion; FUSION; CLASSIFICATION; BAG;
DOI
10.1007/s13369-021-05881-4
CLC Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline Classification Codes
07; 0710; 09
Abstract
In computer vision (CV), action recognition is currently a popular research topic because of well-known applications that include human-machine interaction, robotics, visual surveillance, and video analysis. Many techniques have been presented in the CV literature, but they still face challenges such as complex backgrounds, variation in camera viewpoint, and human movement. A new method for action recognition is proposed in this work, based on the fusion of shape and deep learning features. The method is executed in two steps, from human extraction to action recognition. In the first step, humans are extracted by a simple learning process: HOG features are extracted from a few selected datasets (INRIA, CAVIAR, Weizmann and KTH), robust features are selected using entropy-controlled LSVM maximization, and detection is performed. In the second step, geometric features are extracted from the detected regions while, in parallel, deep learning features are extracted from the original video frames. Because the extracted deep learning features are high-dimensional and some are not relevant, irrelevant features must be removed before fusion. For this purpose, a new feature reduction technique named entropy-controlled geometric mean is presented, which selects the robust deep learning features and removes the irrelevant ones. Finally, both types of features (selected deep learning and original geometric) are fused by the proposed parallel conditional entropy approach, and the resulting feature vector is classified by a cubic multi-class SVM. Six datasets (IXMAS, KTH, Weizmann, UCF Sports, UT Interaction and WVU) are used in the experiments, achieving an average accuracy above 98.00%. A detailed statistical analysis and comparison with existing techniques show the effectiveness of the proposed method.
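The record gives no implementation details, but the selection, fusion, and classification stages described in the abstract can be sketched roughly as follows. This is a minimal Python illustration: the scoring inside entropy_geometric_mean_select and the plain concatenation in fuse are assumptions standing in for the paper's entropy-controlled geometric mean reduction and parallel conditional entropy fusion, the toy random arrays stand in for real deep/geometric descriptors, and the cubic SVM is approximated with a degree-3 polynomial-kernel SVC.

# Hypothetical sketch of the selection, fusion, and classification stages.
import numpy as np
from sklearn.svm import SVC

def entropy_geometric_mean_select(features, keep_ratio=0.5):
    """Rank feature columns by an entropy-weighted geometric-mean score
    (assumed form, not the paper's exact criterion) and keep the top fraction."""
    eps = 1e-12
    f = np.abs(features) + eps
    geo_mean = np.exp(np.mean(np.log(f), axis=0))        # per-feature geometric mean
    p = f / f.sum(axis=0, keepdims=True)
    entropy = -(p * np.log(p)).sum(axis=0)                # per-feature entropy
    score = geo_mean * entropy
    k = max(1, int(keep_ratio * features.shape[1]))
    idx = np.argsort(score)[::-1][:k]
    return features[:, idx], idx

def fuse(deep_selected, geometric):
    """Simple concatenation standing in for the proposed fusion step."""
    return np.concatenate([deep_selected, geometric], axis=1)

# Toy data standing in for deep and geometric descriptors of video frames.
rng = np.random.default_rng(0)
deep = rng.normal(size=(200, 512))
geom = rng.normal(size=(200, 32))
labels = rng.integers(0, 6, size=200)

deep_sel, _ = entropy_geometric_mean_select(deep, keep_ratio=0.25)
fused = fuse(deep_sel, geom)
clf = SVC(kernel="poly", degree=3).fit(fused, labels)     # "cubic" multi-class SVM
print(clf.score(fused, labels))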
Pages: 2609 - 2609
Page count: 1