MSA-GCN: Multiscale Adaptive Graph Convolution Network for gait emotion recognition

Cited by: 15
Authors
Yin, Yunfei [1 ]
Jing, Li [1 ]
Huang, Faliang [2 ]
Yang, Guangchao [1 ]
Wang, Zhuowei [3 ]
Affiliations
[1] Chongqing Univ, Chongqing, Peoples R China
[2] Nanning Normal Univ, Guangxi Key Lab Human Machine Interact & Intellige, Nanning, Peoples R China
[3] CSIRO, Space & Astron, Marsfield, Australia
Keywords
Emotion recognition; Gait emotion recognition; Graph convolutional network; Multiscale mapping;
DOI
10.1016/j.patcog.2023.110117
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Gait emotion recognition plays a crucial role in intelligent systems. Most existing approaches identify emotions by focusing on local actions over time, but they set aside two valuable observations: the effective time-domain distances of different emotions differ, and local actions during walking are quite similar across emotions. Ignoring these facts often impairs recognition performance. To address these issues, a novel model named MSA-GCN (MultiScale Adaptive Graph Convolution Network) is proposed to exploit this observational knowledge and improve emotion recognition. In the proposed model, an adaptive spatio-temporal graph convolution is designed to dynamically select convolution kernels and learn the spatio-temporal features of different emotions. Moreover, a Cross-Scale Mapping Interaction mechanism (CSMI) is proposed to construct an adaptive adjacency matrix for high-quality aggregation of multiscale information. Extensive experiments on public datasets show that, compared with state-of-the-art methods, the proposed approach achieves higher emotion recognition accuracy, indicating that it is a promising direction.
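The record contains no code, so the sketch below is only a rough illustration of the adaptive-adjacency idea mentioned in the abstract: a skeleton graph convolution whose adjacency matrix is inferred per sample from learned joint affinities plus a learned static bias. It is an assumption-laden toy in PyTorch, not the actual MSA-GCN architecture or the CSMI mechanism; all names, shapes, and hyperparameters (AdaptiveGraphConv, embed_dim, the 16-joint skeleton) are illustrative.

```python
# Minimal sketch (not the paper's implementation): a graph convolution layer
# with a data-adaptive adjacency matrix, in the spirit of the adaptive
# aggregation described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveGraphConv(nn.Module):
    """Graph convolution whose adjacency is a softmax over learned joint affinities."""

    def __init__(self, in_channels: int, out_channels: int, num_joints: int, embed_dim: int = 16):
        super().__init__()
        # Projections used to infer pairwise joint affinities from the input features.
        self.theta = nn.Linear(in_channels, embed_dim)
        self.phi = nn.Linear(in_channels, embed_dim)
        self.proj = nn.Linear(in_channels, out_channels)
        # A learned static adjacency bias shared across samples (hypothetical design choice).
        self.static_adj = nn.Parameter(torch.zeros(num_joints, num_joints))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_joints, in_channels) -- per-frame skeleton features.
        q = self.theta(x)                                     # (B, J, E)
        k = self.phi(x)                                       # (B, J, E)
        affinity = torch.bmm(q, k.transpose(1, 2))            # (B, J, J) pairwise affinities
        adj = F.softmax(affinity + self.static_adj, dim=-1)   # adaptive adjacency per sample
        return F.relu(torch.bmm(adj, self.proj(x)))           # aggregate neighbor features


if __name__ == "__main__":
    layer = AdaptiveGraphConv(in_channels=3, out_channels=64, num_joints=16)
    pose = torch.randn(8, 16, 3)   # batch of 8 skeletons, 16 joints, 3-D coordinates
    print(layer(pose).shape)       # torch.Size([8, 16, 64])
```

A full spatio-temporal model would interleave such spatial layers with temporal convolutions over the gait sequence and, as the abstract describes, select among kernels of different temporal scales; that part is omitted here.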
Pages: 11