Lightweight ViT Model for Micro-Expression Recognition Enhanced by Transfer Learning

Cited by: 10
Authors
Liu, Yanju [1 ]
Li, Yange [2 ]
Yi, Xinhai [2 ]
Hu, Zuojin [1 ]
Zhang, Huiyu [2 ]
Liu, Yanzhong [2 ]
Affiliations
[1] Nanjing Normal Univ Special Educ, Sch Math & Informat Sci, Nanjing, Peoples R China
[2] Qiqihar Univ, Sch Comp & Control Engn, Qiqihar, Peoples R China
Source
FRONTIERS IN NEUROROBOTICS | 2022 / Vol. 16
Keywords
computer vision; deep learning; convolutional neural network; vision transformer; micro-expression recognition;
DOI
10.3389/fnbot.2022.922761
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In contrast to macro-expressions, micro-expressions are subtle and difficult to detect, yet they often carry rich information about mental activity, making their reliable recognition valuable in interrogation and healthcare. Neural networks are currently among the most common approaches to micro-expression recognition, but they tend to grow in complexity as accuracy improves, and very large networks impose extreme hardware demands on the deployment equipment. In recent years, vision transformers based on self-attention mechanisms have achieved image recognition and classification accuracy no lower than that of convolutional neural networks; their drawback is that, lacking the image-specific inductive biases built into convolutional networks, they pay for accuracy gains with an exponential increase in parameter count. This paper trains a facial expression feature extractor by transfer learning and then fine-tunes and optimizes the MobileViT model to perform the micro-expression recognition task. First, the CASME II, SAMM, and SMIC datasets are combined into a composite dataset, and macro-expression samples are extracted from three macro-expression datasets; each macro-expression and micro-expression sample is pre-processed identically so that the two are comparable. Second, the macro-expression samples are used to train the MobileNetV2 block in MobileViT as a facial expression feature extractor, saving the weights at the point of highest accuracy. Finally, some hyperparameters of the MobileViT model are determined by grid search, the micro-expression samples are fed in for training, and the samples are classified with an SVM classifier. In the experiments, the proposed method achieved an accuracy of 84.27% while processing each sample in only 35.4 ms. Comparative experiments show that the proposed method matches state-of-the-art methods in accuracy while improving recognition efficiency.
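The final stage of the pipeline the abstract describes (pretrained feature extractor, grid-searched SVM classifier) can be sketched as follows. This is a hedged illustration, not the authors' code: the MobileViT features are replaced by synthetic vectors, and the hyperparameter grid is illustrative rather than taken from the paper.

```python
# Minimal sketch of the classification stage: features from a pretrained
# backbone (here stand-in synthetic vectors, NOT real MobileViT features)
# are classified by an SVM whose hyperparameters are chosen by grid search.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for extracted features: 300 samples, 256-dim vectors, 3 classes.
X = rng.normal(size=(300, 256))
y = rng.integers(0, 3, size=300)
# Shift class means so the synthetic classes are separable.
X += y[:, None] * 2.0

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Grid search over a small, purely illustrative hyperparameter grid.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [0.1, 1.0, 10.0], "gamma": ["scale", 1e-3]},
    cv=3,
)
grid.fit(X_tr, y_tr)
acc = grid.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

In the paper's setting, `X` would be the feature vectors produced by the transfer-learned MobileNetV2 block of MobileViT, and the grid would cover the hyperparameters the authors tuned.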
Pages: 15