Method of Multi-Label Visual Emotion Recognition Fusing Fore-Background Features

Cited: 0
Authors
Feng, Yuehua [1 ]
Wei, Ruoyan [1 ]
Affiliations
[1] Hebei Univ Econ & Business, Sch Management Sci & Informat Engn, Shijiazhuang 050061, Peoples R China
Source
APPLIED SCIENCES-BASEL | 2024 / Vol. 14 / No. 18
Funding
National Natural Science Foundation of China;
Keywords
multi-label; emotion recognition; fore-background features; graph convolutional networks; correlation;
DOI
10.3390/app14188564
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703;
Abstract
This paper proposes a method for multi-label visual emotion recognition that fuses fore-background features, addressing issues that vision-based multi-label emotion recognition often overlooks: the influence on recognition of the background in which the person is placed and of the foreground, such as social interactions between individuals, and the simplification of the multi-label task into multiple independent binary classification tasks, which ignores the global correlations between emotion labels. First, a fore-background-aware emotion recognition model (FB-ER) is proposed: a three-branch multi-feature hybrid fusion network. It efficiently extracts body features through a core region unit (CR-Unit), represents background features as background keywords, and extracts depth-map information to model social interactions between individuals as foreground features. These three features are fused at both the feature and decision levels. Second, a multi-label emotion recognition classifier (ML-ERC) is proposed, which captures the relationships between emotion labels by designing a label co-occurrence probability matrix and a cosine similarity matrix, and uses graph convolutional networks to learn the correlations between emotion labels and generate a correlation-aware classifier. Finally, the visual features are combined with the learned classifier to enable multi-label recognition of 26 different emotions. The proposed method was evaluated on the Emotic dataset, and the results show an improvement of 0.732% in mAP and 0.007 in the Jaccard coefficient compared with the state-of-the-art method.
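The label-correlation idea in ML-ERC can be illustrated with a small numpy sketch. The abstract specifies a label co-occurrence probability matrix, a cosine similarity matrix, and a graph convolution over the label graph; everything else here (equal-weight fusion of the two matrices, one-hot label embeddings, the toy label matrix, and the output dimension) is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def cooccurrence_matrix(Y):
    # Y: (N, C) multi-hot label matrix over N samples and C labels.
    # P[i, j] = P(label i | label j) = count(i AND j) / count(j).
    M = Y.T @ Y                            # pairwise co-occurrence counts
    counts = np.diag(M).astype(float)      # per-label occurrence counts
    return M / np.clip(counts, 1, None)    # column-normalize by label frequency

def cosine_similarity_matrix(Y):
    # Cosine similarity between label occurrence vectors (columns of Y).
    norms = np.clip(np.linalg.norm(Y, axis=0, keepdims=True), 1e-12, None)
    Yn = Y / norms
    return Yn.T @ Yn

def gcn_layer(A, H, W):
    # One graph-convolution step with symmetric normalization:
    # H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

# Toy example: 5 samples, 3 emotion labels (illustrative data, not Emotic).
Y = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 1, 1],
              [1, 1, 0],
              [0, 0, 1]], dtype=float)

# Assumed fusion: equal-weight average of the two correlation matrices.
A = 0.5 * cooccurrence_matrix(Y) + 0.5 * cosine_similarity_matrix(Y)
H = np.eye(3)                                        # stand-in label embeddings
W = np.random.default_rng(0).normal(size=(3, 4))     # learnable weights (random here)
classifier = gcn_layer(A, H, W)                      # (C, d) correlation-aware classifier
print(classifier.shape)
```

In the full method, the resulting per-label classifier rows would be applied to the fused visual features (dot product per label, then a sigmoid) to score each of the 26 emotions independently while still reflecting label correlations.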
Pages: 21
Related Papers
40 records in total
  • [11] STT-Net: Simplified Temporal Transformer for Emotion Recognition
    Khan, Mustaqeem
    El Saddik, Abdulmotaleb
    Deriche, Mohamed
    Gueaieb, Wail
    [J]. IEEE ACCESS, 2024, 12 : 86220 - 86231
  • [12] CVGG-19: Customized Visual Geometry Group Deep Learning Architecture for Facial Emotion Recognition
    Kim, Jung Hwan
    Poulose, Alwin
    Han, Dong Seog
    [J]. IEEE ACCESS, 2024, 12 : 41557 - 41578
  • [13] Context Based Emotion Recognition Using EMOTIC Dataset
    Kosti, Ronak
    Alvarez, Jose M.
    Recasens, Adria
    Lapedriza, Agata
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2020, 42 (11) : 2755 - 2766
  • [14] ImageNet Classification with Deep Convolutional Neural Networks
    Krizhevsky, Alex
    Sutskever, Ilya
    Hinton, Geoffrey E.
    [J]. COMMUNICATIONS OF THE ACM, 2017, 60 (06) : 84 - 90
  • [15] Context-Aware Emotion Recognition Networks
    Lee, Jiyoung
    Kim, Seungryong
    Kim, Sunok
    Park, Jungin
    Sohn, Kwanghoon
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 10142 - 10151
  • [16] MegaDepth: Learning Single-View Depth Prediction from Internet Photos
    Li, Zhengqi
    Snavely, Noah
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 2041 - 2050
  • [17] Facial Expression Recognition Methods in the Wild Based on Fusion Feature of Attention Mechanism and LBP
    Liao, Jun
    Lin, Yuanchang
    Ma, Tengyun
    He, Songxiying
    Liu, Xiaofang
    He, Guotian
    [J]. SENSORS, 2023, 23 (09)
  • [18] A Multi-Label Classification Algorithm Using Association Rule Mining
    Liu, Junyu
    Jia, Xiuyi
    [J]. Journal of Software, 2017, 28 (11) : 2865 - 2878
  • [19] Liu SL, 2021, Arxiv, DOI arXiv:2107.10834
  • [20] Long T.D., 2022, Int. J. Mod. Educ. Comput. Sci. (IJMECS), V14, P53, DOI 10.5815/ijmecs.2022.06.05