An Emotion Recognition Method Based On Feature Fusion and Self-Supervised Learning

Citations: 0
Authors
Cao, Xuanmeng [1 ]
Sun, Ming [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Dept Comp Sci & Engn, Chengdu, Peoples R China
Source
2023 2ND ASIA CONFERENCE ON ALGORITHMS, COMPUTING AND MACHINE LEARNING, CACML 2023, 2023
Keywords
emotion recognition; physiological signals; self-supervised learning; feature fusion;
DOI
10.1145/3590003.3590041
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Because emotional disorders manifest in many kinds of human mental and cardiac problems, accurate emotion recognition is in high demand. Deep learning methods using physiological signals have gained widespread application in emotion recognition. However, many existing methods rely solely on deep features, which can be difficult to interpret and may not provide a comprehensive understanding of physiological signals. To address this issue, we propose a novel emotion recognition method based on feature fusion and self-supervised learning. This approach combines shallow features with deep learning features, yielding a more holistic and interpretable analysis of physiological signals. In addition, we transfer a self-supervised learning method from image processing to signal processing, which learns sophisticated and informative features from unlabeled signal data. Our experiments are conducted on WESAD, a publicly available dataset, and the proposed model shows a significant improvement in performance, confirming the superiority of our method over state-of-the-art approaches.
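The abstract itself gives no implementation details, but the core idea of fusing hand-crafted shallow features with learned deep features can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the statistical features chosen, the window shape, and the stand-in random-projection "encoder" (a placeholder for the paper's self-supervised network) are not taken from the paper.

```python
import numpy as np

def shallow_features(window):
    """Hand-crafted statistics per channel: mean, std, min, max.
    `window` has shape (channels, samples)."""
    return np.concatenate([window.mean(axis=-1), window.std(axis=-1),
                           window.min(axis=-1), window.max(axis=-1)])

def deep_features(window, weights):
    """Stand-in for a learned encoder: one linear projection + ReLU.
    In the paper this role is played by a self-supervised network."""
    return np.maximum(weights @ window.reshape(-1), 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 256))              # hypothetical 3-channel signal window
W = rng.standard_normal((32, 3 * 256)) * 0.01  # untrained placeholder weights

# Feature fusion = concatenation of both views of the same window.
fused = np.concatenate([shallow_features(x), deep_features(x, W)])
print(fused.shape)  # (3*4 + 32,) = (44,)
```

The fused vector would then feed a downstream classifier; the interpretability claim rests on the first 12 dimensions remaining readable statistics even when the deep part is opaque.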
Pages: 216-221 (6 pages)
Related Papers
50 records total
  • [41] Novel feature fusion method for speech emotion recognition based on multiple kernel learning
    Zhao, L. (zhaoli@seu.edu.cn), Southeast University (29):
  • [42] DEEP INVESTIGATION OF INTERMEDIATE REPRESENTATIONS IN SELF-SUPERVISED LEARNING MODELS FOR SPEECH EMOTION RECOGNITION
    Zhu, Zhi
    Sato, Yoshinao
    2023 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING WORKSHOPS, ICASSPW, 2023,
  • [43] Video-Audio Emotion Recognition Based on Feature Fusion Deep Learning Method
    Song, Yanan
    Cai, Yuanyang
    Tan, Lizhe
    2021 IEEE INTERNATIONAL MIDWEST SYMPOSIUM ON CIRCUITS AND SYSTEMS (MWSCAS), 2021, : 611 - 616
  • [44] Feature Fusion of Speech Emotion Recognition Based on Deep Learning
    Liu, Gang
    He, Wei
    Jin, Bicheng
    PROCEEDINGS OF 2018 INTERNATIONAL CONFERENCE ON NETWORK INFRASTRUCTURE AND DIGITAL CONTENT (IEEE IC-NIDC), 2018, : 193 - 197
  • [45] Self-supervised Visual Feature Learning and Classification Framework: Based on Contrastive Learning
    Wang, Zhibo
    Yan, Shen
    Zhang, Xiaoyu
    Lobo, Niels Da Vitoria
    16TH IEEE INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION, ROBOTICS AND VISION (ICARCV 2020), 2020, : 719 - 725
  • [46] SELFGAIT: A SPATIOTEMPORAL REPRESENTATION LEARNING METHOD FOR SELF-SUPERVISED GAIT RECOGNITION
    Liu, Yiqun
    Zeng, Yi
    Pu, Jian
    Shan, Hongming
    He, Peiyang
    Zhang, Junping
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 2570 - 2574
  • [47] Analysis of Self-Supervised Learning and Dimensionality Reduction Methods in Clustering-Based Active Learning for Speech Emotion Recognition
    Vaaras, Einari
    Airaksinen, Manu
    Rasanen, Okko
    INTERSPEECH 2022, 2022, : 1143 - 1147
  • [48] Consistency self-supervised learning method for robust automatic speech recognition
    Gao, Changfeng
    Cheng, Gaofeng
    Zhang, Pengyuan
    Shengxue Xuebao/Acta Acustica, 2023, 48 (03): : 578 - 587
  • [49] An image retrieval approach based on feature extraction and self-supervised learning
    Kolahkaj, Maral
    2022 SECOND INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING AND HIGH PERFORMANCE COMPUTING (DCHPC), 2022, : 46 - 51
  • [50] Self-Supervised Monocular Depth Estimation Based on Full Scale Feature Fusion
    Wang C.
    Chen Y.
    Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, 2023, 35 (05): : 667 - 675