Joint Relation Modeling and Feature Learning for Class-Incremental Facial Expression Recognition

Times Cited: 0
Authors
Lv, Yuanling [1 ,2 ]
Yan, Yan [1 ,2 ]
Wang, Hanzi [1 ]
Affiliations
[1] Xiamen Univ, Sch Informat, Fujian Key Lab Sensing & Comp Smart City, Xiamen, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, Xian, Peoples R China
Source
PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT V | 2024 / Vol. 14429
Funding
National Natural Science Foundation of China;
Keywords
Facial expression recognition; Class-incremental learning; Relation modeling; Feature learning;
DOI
10.1007/978-981-99-8469-5_11
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Due to the diversity of human emotions, it is often difficult to collect all expression categories at once in practical applications. In this paper, we investigate facial expression recognition (FER) under the class-incremental learning (CIL) paradigm, where easily accessible basic expressions form an initial task and new compound expressions are learned continually. To this end, we propose a novel joint relation modeling and feature learning (JRF) method, which mainly consists of a local nets module (LNets), a dynamic relation modeling module (DRM), and an adaptive feature learning module (AFL). By exploiting the relationship between old and new expressions, JRF effectively alleviates the stability-plasticity dilemma. Specifically, we develop LNets to capture subtle distinctions across expressions, where a novel diversity loss is designed to locate informative facial regions in each local net. We then introduce DRM, which enhances feature representations with two types of graph convolutional networks (GCNs) (an image-shared GCN and two image-specific GCNs) from the perspectives of global-local graphs and old-new classes. Finally, we design AFL to explicitly fuse old- and new-class features via a weight selection mechanism. Extensive experiments on both in-the-lab and in-the-wild facial expression databases demonstrate the superiority of our method over several state-of-the-art methods for class-incremental FER.
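To make the two generic building blocks mentioned in the abstract concrete, here is a minimal, illustrative numpy sketch of (a) one standard graph-convolution propagation step (the operation underlying the GCNs in DRM, using the common symmetric normalization with self-loops) and (b) a weighted fusion of old- and new-class features (one plausible form of the fusion in AFL). All function names, shapes, and the convex-combination fusion rule are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: ReLU(D^{-1/2} (A + I) D^{-1/2} X W).

    A: (n, n) adjacency matrix, X: (n, d_in) node features,
    W: (d_in, d_out) learnable weights.
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

def adaptive_fusion(f_old, f_new, alpha):
    """Fuse old- and new-class features as a convex combination.

    alpha in [0, 1] plays the role of a selected weight: alpha = 1
    keeps only the old-class feature, alpha = 0 only the new one.
    """
    return alpha * f_old + (1.0 - alpha) * f_new
```

In the paper, the fusion weight is chosen by a learned weight selection mechanism rather than fixed by hand; the sketch only shows the shape of the computation.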
Pages: 134-146
Page count: 13