Multimodal Emotion Classification With Multi-Level Semantic Reasoning Network

Cited by: 11
Authors
Zhu, Tong [1 ]
Li, Leida [2 ]
Yang, Jufeng [3 ]
Zhao, Sicheng [4 ]
Xiao, Xiao [5 ]
Affiliations
[1] China Univ Min & Technol, Sch Informat & Control Engn, Xuzhou 221116, Peoples R China
[2] Xidian Univ, Sch Artificial Intelligence, Xian 710071, Peoples R China
[3] Nankai Univ, Sch Comp & Control Engn, Tianjin 300350, Peoples R China
[4] Tsinghua Univ, BNRist, Beijing 100084, Peoples R China
[5] Xidian Univ, Sch Telecommun Engn, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Semantics; Sentiment analysis; Visualization; Cognition; Feature extraction; Task analysis; Social networking (online); Multimodal emotion classification; Graph attention module; Semantic reasoning; SENTIMENT ANALYSIS;
DOI
10.1109/TMM.2022.3214989
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Nowadays, people are accustomed to posting images and associated text to express their emotions on social networks. Accordingly, multimodal sentiment analysis has drawn increasing attention. Most existing image-text multimodal sentiment analysis methods simply predict sentiment polarity. However, the same sentiment polarity may correspond to quite different emotions, such as happiness vs. excitement and disgust vs. sadness. Therefore, sentiment polarity is ambiguous and may not convey the precise emotions that people want to express. Psychological research has shown that objects and words are emotional stimuli and that semantic concepts can affect the role of these stimuli. Inspired by this observation, this paper presents a new MUlti-Level SEmantic Reasoning network (MULSER) for fine-grained image-text multimodal emotion classification, which investigates not only the semantic relationships among objects and among words, respectively, but also the semantic relationship between regional objects and global concepts. For the image modality, we first build graphs to extract objects and a global representation, and employ a graph attention module to perform bilevel semantic reasoning. A joint visual graph is then built to learn regional-global semantic relations. For the text modality, we build a word graph and further apply graph attention to reinforce the interdependencies among words in a sentence. Finally, a cross-modal attention fusion module is proposed to fuse the semantic-enhanced visual and textual features, from which informative multimodal representations are obtained for fine-grained emotion classification. Experimental results on public datasets demonstrate the superiority of the proposed model over state-of-the-art methods.
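To make the two building blocks named in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' released code: a single-head graph attention layer for semantic reasoning over object or word nodes, and a cross-modal attention fusion step producing fine-grained emotion logits. All layer sizes, the number of emotion classes, and class names are illustrative assumptions.

```python
# Illustrative sketch only; hyperparameters and structure are assumptions,
# not the MULSER implementation described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttentionLayer(nn.Module):
    """Single-head graph attention over a fully connected semantic graph."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (batch, num_nodes, in_dim)
        h = self.proj(nodes)                                   # (B, N, D)
        B, N, D = h.shape
        # Pairwise concatenation of node features to score every edge.
        hi = h.unsqueeze(2).expand(B, N, N, D)
        hj = h.unsqueeze(1).expand(B, N, N, D)
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1))).squeeze(-1)
        alpha = F.softmax(e, dim=-1)                           # (B, N, N)
        return F.elu(torch.bmm(alpha, h))                      # (B, N, D)


class CrossModalAttentionFusion(nn.Module):
    """Fuse visual and textual node features with cross attention."""

    def __init__(self, dim: int, num_emotions: int = 8):
        super().__init__()
        self.v2t = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.t2v = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_emotions)

    def forward(self, visual: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # visual: (B, Nv, D) object/global nodes; text: (B, Nt, D) word nodes
        v_att, _ = self.v2t(visual, text, text)    # text-attended visual features
        t_att, _ = self.t2v(text, visual, visual)  # visual-attended textual features
        fused = torch.cat([v_att.mean(dim=1), t_att.mean(dim=1)], dim=-1)
        return self.classifier(fused)              # fine-grained emotion logits


if __name__ == "__main__":
    gat = GraphAttentionLayer(256, 256)
    fusion = CrossModalAttentionFusion(256)
    objects = torch.randn(2, 10, 256)   # e.g. 10 detected object regions per image
    words = torch.randn(2, 20, 256)     # e.g. 20 word embeddings per sentence
    logits = fusion(gat(objects), gat(words))
    print(logits.shape)                 # torch.Size([2, 8])
```

The sketch reflects the abstract's pipeline at a high level: graph attention first enriches each modality's node features, and cross-modal attention then lets each modality attend to the other before pooling and classification.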
Pages: 6868 - 6880
Page count: 13