FER-PCVT: Facial Expression Recognition with Patch-Convolutional Vision Transformer for Stroke Patients

Cited by: 7
Authors
Fan, Yiming [1]
Wang, Hewei [2]
Zhu, Xiaoyu [1]
Cao, Xiangming [3]
Yi, Chuanjian [4]
Chen, Yao [5]
Jia, Jie [2]
Lu, Xiaofeng [1,6]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
[2] Fudan Univ, Huashan Hosp, Dept Rehabil, Shanghai 200040, Peoples R China
[3] Nantong Univ, Dept Oncol, Jiangyin Peoples Hosp, Wuxi 214400, Peoples R China
[4] Qingdao Univ, Affiliated Hosp, Dept Rehabil, Qingdao 266000, Peoples R China
[5] Shanghai Third Rehabil Hosp, Dept Rehabil, Shanghai 200436, Peoples R China
[6] Shanghai Univ, Wenzhou Inst, Wenzhou 325000, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
facial expression recognition (FER); vision transformer (ViT); convolutional neural networks (CNNs); stroke; rehabilitation; SYSTEM;
DOI
10.3390/brainsci12121626
Chinese Library Classification (CLC)
Q189 [Neuroscience];
Subject Classification Code
071006;
Abstract
Early rehabilitation at an appropriate intensity contributes to the physical recovery of stroke survivors. In clinical practice, physicians judge whether the training intensity is suitable based on patients' narratives, training scores, and evaluation scales, which places tremendous pressure on medical resources. In this study, a lightweight facial expression recognition algorithm is proposed to automatically assess stroke patients' training motivation. First, the properties of convolution are introduced into the Vision Transformer's structure, allowing the model to extract both local and global features of facial expressions. Second, the pyramid-shaped feature output mode of Convolutional Neural Networks is also introduced, significantly reducing the model's parameter count and computational cost. Moreover, a classifier designed to better distinguish the facial expressions of stroke patients further improves performance. We verified the proposed algorithm on the Real-world Affective Faces Database (RAF-DB), the Face Expression Recognition Plus Dataset (FER+), and a private dataset of stroke patients. Experiments show that the backbone network of the proposed algorithm outperforms the Pyramid Vision Transformer (PvT) and the Convolutional Vision Transformer (CvT) with fewer parameters and floating-point operations (FLOPs). In addition, the algorithm reaches 89.44% accuracy on RAF-DB, higher than that reported in other recent studies. In particular, it obtains 99.81% accuracy on the private dataset with only 4.10M parameters.
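The abstract names two architectural ideas: convolutional tokenization inside a Vision Transformer (local + global features) and CNN-style pyramid downsampling of the token sequence (fewer parameters and FLOPs). The following is a minimal PyTorch sketch of those two ideas only; all module names, strides, and dimensions here are illustrative assumptions, not the authors' actual FER-PCVT implementation.

```python
import torch
import torch.nn as nn

class ConvPatchEmbed(nn.Module):
    """Overlapping convolutional patch embedding: a strided 3x3 convolution
    tokenizes the feature map while preserving local spatial structure,
    which is the "convolution inside ViT" idea the abstract describes."""
    def __init__(self, in_ch, embed_dim, stride=2):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=3,
                              stride=stride, padding=1)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):                      # x: (B, C, H, W)
        x = self.proj(x)                       # (B, D, H/s, W/s)
        B, D, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, N, D) token sequence
        return self.norm(tokens), (H, W)

class PyramidStage(nn.Module):
    """One pyramid stage: convolutional tokenization (local features)
    followed by a standard transformer encoder block (global attention)."""
    def __init__(self, in_ch, embed_dim, num_heads=4):
        super().__init__()
        self.embed = ConvPatchEmbed(in_ch, embed_dim)
        self.attn = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads,
            dim_feedforward=embed_dim * 4, batch_first=True)

    def forward(self, x):
        tokens, (H, W) = self.embed(x)
        tokens = self.attn(tokens)
        # Fold tokens back into a feature map so the next stage can
        # downsample again, giving the CNN-like feature pyramid.
        B, N, D = tokens.shape
        return tokens.transpose(1, 2).reshape(B, D, H, W)

# Each stage halves the spatial resolution, so later attention layers
# operate on a quarter as many tokens as the previous ones.
backbone = nn.Sequential(
    PyramidStage(3, 64), PyramidStage(64, 128), PyramidStage(128, 256))
feat = backbone(torch.randn(1, 3, 64, 64))    # -> (1, 256, 8, 8)
print(feat.shape)
```

Because self-attention cost grows quadratically with the number of tokens, this per-stage downsampling is the likely source of the parameter and FLOP savings the abstract reports relative to a plain ViT.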
Pages: 20