Skeleton-Based Gait Recognition Based on Deep Neuro-Fuzzy Network

Cited by: 1
Authors
Qiu, Jiefan [1 ]
Jia, Yizhe [1 ]
Chen, Xingyu [2 ]
Zhao, Xiangyun [1 ]
Feng, Hailin [3 ]
Fang, Kai [3 ]
Affiliations
[1] ZheJiang Univ Technol, Sch Comp Sci & Technol, Hangzhou 310023, Peoples R China
[2] ZJUT Aishiguang New Qual Meal Nutr Res Inst, Large Model Dept, Hangzhou 310005, Peoples R China
[3] Zhejiang Agr & Forestry Univ, Sch Math & Comp Sci, Hangzhou 311300, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Gait recognition; Skeleton; Pose estimation; Target tracking; Accuracy; Robustness; Three-dimensional displays; Legged locomotion; Fuzzy neural networks; Fuzzy pose estimation; gait recognition; multiperson scenarios; skeleton;
DOI
10.1109/TFUZZ.2024.3444489
CLC number (Chinese Library Classification)
TP18 [Artificial intelligence theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
Gait recognition aims to identify users by their walking patterns. Compared with appearance-based methods, skeleton-based methods exhibit good robustness to cluttered backgrounds, carried items, and clothing variations. However, skeleton extraction suffers from incorrect human tracking and missing keypoints, especially in multiperson scenarios. To address these issues, this article proposes a novel gait recognition method based on a deep neuro-fuzzy network, specifically designed for multiperson scenarios. The method consists of an individual gait separation module (IGSM) and a fuzzy skeleton completion network (FU-SCN). To achieve effective human tracking, IGSM employs root-skeleton keypoint prediction and object keypoint similarity (OKS)-based skeleton matching to separate individual gait sets when multiple persons are present. In addition, missing keypoints render human pose estimation fuzzy. We propose FU-SCN, a deep neuro-fuzzy network, to enhance the interpretability of the fuzzy pose estimation by generating fine-grained gait representations. FU-SCN uses a fuzzy bottleneck structure to extract features from low-dimensional keypoints and multiscale fusion to capture the dissimilar relations of the human body during walking at each scale. Extensive experiments are conducted on the CASIA-B dataset and our multigait dataset. The results show that our method is among the state-of-the-art methods and outperforms them under complex scenarios. Compared with PTSN, PoseMapGait, JointsGait, GaitGraph2, and CycleGait, our method achieves average accuracy improvements of 53.77%, 42.07%, 25.3%, 13.47%, and 9.5%, respectively, while keeping a low time cost of 180 ms on average on edge devices.
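The abstract's IGSM step relies on OKS to decide which detected skeleton belongs to which walking person. The sketch below is a minimal, hypothetical illustration of such OKS-based matching, not the paper's implementation: the per-keypoint constants follow the standard COCO keypoint-evaluation convention, while the function names, the 0.5 threshold, the greedy assignment strategy, and the use of the bounding-box area as the scale are assumptions.

```python
import numpy as np

# COCO per-keypoint sigmas (17 joints); the OKS falloff uses k_i = 2 * sigma_i.
SIGMA = np.array([0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072,
                  0.072, 0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089])

def oks(pose_a, pose_b, scale, visible):
    """Object keypoint similarity between two (17, 2) keypoint arrays.

    scale   -- object scale (e.g., sqrt of the person's bounding-box area)
    visible -- boolean mask of keypoints to score
    """
    d2 = np.sum((pose_a - pose_b) ** 2, axis=-1)        # squared joint distances
    valid = visible & np.isfinite(d2)                   # drop joints missing in either pose
    if not valid.any():
        return 0.0
    k2 = (2.0 * SIGMA[valid]) ** 2
    e = d2[valid] / (2.0 * scale ** 2 * k2 + 1e-9)      # normalized per-joint error
    return float(np.exp(-e).mean())

def assign_to_tracks(frame_poses, tracks, scale, oks_thresh=0.5):
    """Greedily attach each pose detected in the current frame to the most
    similar existing per-person gait set; unmatched poses start new sets.

    frame_poses -- list of (17, 2) arrays (NaN where a keypoint is missing)
    tracks      -- dict: person_id -> list of previously collected poses
    """
    next_id = max(tracks, default=-1) + 1
    for pose in frame_poses:
        vis = ~np.isnan(pose).any(axis=-1)              # keypoints detected this frame
        best_id, best_oks = None, oks_thresh
        for pid, history in tracks.items():
            score = oks(pose, history[-1], scale, vis)  # compare with the track's last pose
            if score > best_oks:
                best_id, best_oks = pid, score
        if best_id is None:                             # no track is similar enough
            best_id, next_id = next_id, next_id + 1
            tracks[best_id] = []
        tracks[best_id].append(pose)
    return tracks
```

In this sketch, each entry of `tracks` accumulates one person's skeleton sequence, which would then be passed downstream (in the paper's pipeline, to FU-SCN for completion and recognition).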
Pages: 431-443
Number of pages: 13