RETRACTED: A Fused Heterogeneous Deep Neural Network and Robust Feature Selection Framework for Human Actions Recognition (Retracted Article)

Cited by: 22
Authors
Khan, Muhammad Attique [1 ]
Zhang, Yu-Dong [2 ]
Alhusseni, Majed [3 ]
Kadry, Seifedine [4 ]
Wang, Shui-Hua [5 ]
Saba, Tanzila [6 ]
Iqbal, Tassawar [7 ]
Affiliations
[1] HITEC Univ, Dept Comp Sci, Museum Rd, Taxila, Pakistan
[2] Univ Leicester, Dept Informat, Leicester LE1 7RH, Leics, England
[3] Univ Hail, Coll Comp Sci & Engn, Hail, Saudi Arabia
[4] Noroff Univ Coll, Dept Appl Data Sci, Noroff Oslo, Norway
[5] Univ Leicester, Dept Math, Leicester LE1 7RH, Leics, England
[6] Prince Sultan Univ, Coll Comp & Informat Sci, Riyadh, Saudi Arabia
[7] COMSATS Univ Islamabad, Dept Comp Sci, Wah Campus, Islamabad, Pakistan
Keywords
Action recognition; Silhouette extraction; Shape features; Deep features; Feature selection; Feature fusion; FUSION; CLASSIFICATION; BAG;
DOI
10.1007/s13369-021-05881-4
CLC classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline codes
07; 0710; 09
Abstract
In computer vision (CV), action recognition is currently a hot research topic owing to well-known applications including human-machine interaction, robotics, visual surveillance, and video analysis. Many techniques have been presented in the CV literature, but they still face challenges such as complex backgrounds, variation in camera viewpoint, and human movement. This work proposes a new action recognition method based on the fusion of shape and deep learning features. The method proceeds in two steps, from human extraction to action recognition. In the first step, humans are extracted through a simple learning process: HOG features are extracted from selected datasets (INRIA, CAVIAR, Weizmann, and KTH), robust features are selected by entropy-controlled LSVM maximization, and detection is performed. In the second step, geometric features are extracted from the detected regions while deep learning features are extracted in parallel from the original video frame. The extracted deep learning features are high-dimensional and partly irrelevant, so irrelevant features must be removed before fusion. For this purpose, a new feature reduction technique named entropy-controlled geometric mean is presented; it selects the robust deep learning features and discards the irrelevant ones. Finally, both feature types (the selected deep learning features and the original geometric features) are fused by the proposed parallel conditional entropy approach, and the resulting feature vector is classified by a cubic multi-class SVM. Six datasets (IXMAS, KTH, Weizmann, UCF Sports, UT Interaction, and WVU) are used in the experiments, achieving an average accuracy above 98.00%. Detailed statistical analysis and comparison with existing techniques show the effectiveness of the proposed method.
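The abstract names an "entropy-controlled geometric mean" feature reduction but does not give its formula here. The following is a minimal, hypothetical sketch of one plausible reading: score each deep feature by the Shannon entropy of its value histogram, then keep only features whose score reaches the geometric mean of all scores. The function and parameter names (`feature_entropy`, `entropy_geomean_select`, `bins`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def feature_entropy(X, bins=16):
    """Shannon entropy (in bits) of each column's value histogram.

    Hypothetical scoring step -- the paper only names the technique.
    """
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        hist, _ = np.histogram(X[:, j], bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                       # drop empty bins before the log
        scores[j] = -np.sum(p * np.log2(p))
    return scores

def entropy_geomean_select(X, bins=16):
    """Keep features whose entropy reaches the geometric mean of all entropies."""
    s = feature_entropy(X, bins)
    gmean = np.exp(np.log(s + 1e-12).mean())  # geometric mean, guarded against log(0)
    mask = s >= gmean
    return X[:, mask], mask

# Toy usage: 200 frames, each with a 64-dimensional deep feature vector
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
X_sel, mask = entropy_geomean_select(X)
```

Because the geometric mean never exceeds the maximum score, at least one feature always survives; on near-uniform scores the rule keeps roughly half the features.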
Pages: 2609-2609
Page count: 1