RETRACTED: A Fused Heterogeneous Deep Neural Network and Robust Feature Selection Framework for Human Actions Recognition (Retracted Article)

Cited by: 22
Authors
Khan, Muhammad Attique [1 ]
Zhang, Yu-Dong [2 ]
Alhaisoni, Majed [3 ]
Kadry, Seifedine [4 ]
Wang, Shui-Hua [5 ]
Saba, Tanzila [6 ]
Iqbal, Tassawar [7 ]
Affiliations
[1] HITEC Univ, Dept Comp Sci, Museum Rd, Taxila, Pakistan
[2] Univ Leicester, Dept Informat, Leicester LE1 7RH, Leics, England
[3] Univ Hail, Coll Comp Sci & Engn, Hail, Saudi Arabia
[4] Noroff Univ Coll, Dept Appl Data Sci, Noroff Oslo, Norway
[5] Univ Leicester, Dept Math, Leicester LE1 7RH, Leics, England
[6] Prince Sultan Univ, Coll Comp & Informat Sci, Riyadh, Saudi Arabia
[7] COMSATS Univ Islamabad, Dept Comp Sci, Wah Campus, Islamabad, Pakistan
Keywords
Action recognition; Silhouette extraction; Shape features; Deep features; Feature selection; Feature fusion; FUSION; CLASSIFICATION; BAG;
DOI
10.1007/s13369-021-05881-4
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
In computer vision (CV), action recognition is an active research topic owing to applications such as human-machine interaction, robotics, visual surveillance, and video analysis. Many techniques have been presented in the CV literature, yet they still face challenges such as complex backgrounds, variation in camera viewpoint, and human movement. This work proposes a new method for action recognition based on the fusion of shape and deep learning features. The method proceeds in two steps, from human extraction to action recognition. In the first step, humans are extracted by a simple learning process: HOG features are extracted from selected datasets (INRIA, CAVIAR, Weizmann and KTH), robust features are then selected using entropy-controlled LSVM maximization, and detection is performed. In the second step, geometric features are extracted from the detected regions while deep learning features are extracted in parallel from the original video frame. Because the extracted deep learning features are high-dimensional and some are not relevant, irrelevant features must be removed before fusion. For this purpose, a new feature reduction technique named entropy-controlled geometric mean is presented, which selects the robust deep learning features and discards the irrelevant ones. Finally, both types of features (selected deep learning and original geometric) are fused by the proposed parallel conditional entropy approach, and the resulting feature vector is classified by a cubic multi-class SVM. Six datasets (IXMAS, KTH, Weizmann, UCF Sports, UT Interaction and WVU) are used in the experiments, achieving an average accuracy above 98.00%. A detailed statistical analysis and comparison with existing techniques show the effectiveness of the proposed method.
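The first stage (HOG features plus an entropy-controlled LSVM for human detection) is described only at a high level in the abstract. The sketch below shows a conventional HOG plus linear-SVM detector of the kind referred to, assuming a simple patch-based training setup; the patch size, SVM parameters, and the helper functions are illustrative and are not taken from the paper, and the entropy-controlled maximization step itself is not reproduced.

```python
# Minimal sketch of a HOG + linear-SVM human detector (illustrative only;
# the paper's entropy-controlled LSVM maximization is not reproduced here).
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_descriptor(patch):
    """Compute a HOG descriptor for a grayscale patch (e.g., 128x64 pixels)."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

def train_detector(pos_patches, neg_patches):
    """Train a linear SVM on HOG features of human / non-human patches."""
    X = np.array([hog_descriptor(p) for p in pos_patches + neg_patches])
    y = np.array([1] * len(pos_patches) + [0] * len(neg_patches))
    clf = LinearSVC(C=0.01, max_iter=10000)
    clf.fit(X, y)
    return clf

def is_human(clf, patch):
    """Classify a candidate window as human (1) or background (0)."""
    return int(clf.predict(hog_descriptor(patch).reshape(1, -1))[0])
```

In practice, positive patches would be cropped from annotated person regions of datasets such as INRIA or CAVIAR and negatives sampled from background regions, with the trained classifier applied in a sliding-window fashion over each frame.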
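For the later stages, the abstract names an entropy-controlled geometric mean selection of deep features, a parallel conditional-entropy fusion with the geometric features, and a cubic (degree-3 polynomial) multi-class SVM, without giving formulas. The sketch below is therefore one plausible reading, not the paper's method: each deep-feature column is scored by its Shannon entropy, columns scoring at or above the geometric mean of all scores are kept, simple concatenation stands in for the paper's fusion rule, and classification uses a polynomial-kernel SVM of degree 3. Every threshold, score definition, and helper here is an assumption.

```python
# Illustrative sketch: entropy-based deep-feature selection, placeholder
# fusion, and a cubic (degree-3 polynomial kernel) multi-class SVM.
# The scoring rule and concatenation-based fusion are assumptions; the
# paper's "entropy-controlled geometric mean" and "parallel conditional
# entropy" formulations are not reproduced here.
import numpy as np
from scipy.stats import entropy
from sklearn.svm import SVC

def select_by_entropy(deep_features, n_bins=32):
    """Keep deep-feature columns whose Shannon entropy is at least the
    geometric mean of all per-column entropies (assumed criterion)."""
    scores = []
    for col in deep_features.T:
        hist, _ = np.histogram(col, bins=n_bins)
        p = hist / max(hist.sum(), 1)
        scores.append(entropy(p + 1e-12))          # Shannon entropy of the column
    scores = np.array(scores)
    geo_mean = np.exp(np.mean(np.log(scores + 1e-12)))
    keep = scores >= geo_mean
    return deep_features[:, keep], keep

def fuse(selected_deep, geometric_features):
    """Placeholder fusion: column-wise concatenation of both feature sets."""
    return np.hstack([selected_deep, geometric_features])

def train_cubic_svm(fused_features, labels):
    """Multi-class SVM with a cubic (degree-3) polynomial kernel."""
    clf = SVC(kernel='poly', degree=3, C=1.0, gamma='scale')
    clf.fit(fused_features, labels)
    return clf
```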
Pages: 2609-2609
Number of pages: 1