Emoji multimodal microblog sentiment analysis based on mutual attention mechanism

Cited: 1
Authors
Lou, Yinxia [1 ]
Zhou, Junxiang [2 ]
Zhou, Jun [3 ]
Ji, Donghong [3 ]
Zhang, Qing [4 ]
Affiliations
[1] Jianghan Univ, Sch Artificial Intelligence, Wuhan 430056, Peoples R China
[2] Shangqiu Normal Univ, Sch Informat Technol, Shangqiu 476000, Peoples R China
[3] Wuhan Univ, Sch Cyber Sci & Engn, Key Lab Aerosp Informat Secur & Trusted Comp, Minist Educ, Wuhan 430072, Peoples R China
[4] North China DEAN Power Engn Beijing Co Ltd, Beijing 100120, Peoples R China
Source
SCIENTIFIC REPORTS, 2024, Vol. 14, Issue 1
Keywords
Emoji; Mutual attention mechanism; Multimodal sentiment analysis; Multimodal fusion
DOI
10.1038/s41598-024-80167-x
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Discipline Classification Codes
07; 0710; 09
Abstract
Emojis mimic human facial expressions and postures through visual means to convey emotions and opinions. They are widely used on social media platforms such as Sina Weibo and have become a crucial feature for sentiment analysis. However, existing approaches often treat emojis as special symbols or convert them into text labels, neglecting their rich visual information. We propose a novel multimodal information integration model for emoji microblog sentiment analysis. To effectively leverage the visual information of emojis, the model employs a text-emoji visual mutual attention mechanism. Experiments on a manually annotated microblog dataset show that, compared with baseline models that do not incorporate emoji visual information, the proposed model improves macro-F1 by 1.37% and accuracy by 2.30%. To facilitate related research, our corpus will be made publicly available at https://github.com/yx100/Emojis/blob/main/weibo-emojis-annotation.
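The abstract describes the text-emoji visual mutual attention mechanism only at a high level. Below is a minimal sketch in PyTorch of what such a mutual attention fusion layer could look like; it is not the authors' released code, and the module names, feature dimensions, and three-class sentiment head are illustrative assumptions layered on top of generic text-encoder and emoji-image-encoder outputs.

import torch
import torch.nn as nn

class MutualAttentionFusion(nn.Module):
    # Fuses token features (e.g. from a BERT-style text encoder) with emoji
    # visual features (e.g. from a CNN over emoji images) by letting each
    # modality attend to the other, in the spirit of the paper's
    # "text-emoji visual mutual attention" idea.
    def __init__(self, dim: int = 768, heads: int = 8, classes: int = 3):
        super().__init__()
        self.text_to_emoji = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.emoji_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, classes)

    def forward(self, text_feats, emoji_feats):
        # text_feats:  (batch, n_tokens, dim); emoji_feats: (batch, n_emojis, dim)
        text_ctx, _ = self.text_to_emoji(text_feats, emoji_feats, emoji_feats)
        emoji_ctx, _ = self.emoji_to_text(emoji_feats, text_feats, text_feats)
        # Mean-pool each attended sequence, then fuse by concatenation.
        fused = torch.cat([text_ctx.mean(dim=1), emoji_ctx.mean(dim=1)], dim=-1)
        return self.classifier(fused)

# Toy usage with random tensors standing in for encoder outputs:
model = MutualAttentionFusion()
text = torch.randn(2, 32, 768)    # 2 posts, 32 token vectors each
emoji = torch.randn(2, 4, 768)    # 2 posts, 4 emoji image vectors each
logits = model(text, emoji)       # shape (2, 3): one score per sentiment class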
Pages: 12
Related Papers
50 records in total
  • [1] Gated Mechanism for Attention Based Multimodal Sentiment Analysis
    Kumar, Ayush
    Vepa, Jithendra
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020: 4477-4481
  • [2] Emoji-Based Sentiment Analysis Using Attention Networks
    Lou, Yinxia
    Zhang, Yue
    Li, Fei
    Qian, Tao
    Ji, Donghong
    ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2020, 19(05)
  • [3] Multimodal sentiment analysis based on multi-head attention mechanism
    Xi, Chen
    Lu, Guanming
    Yan, Jingjie
    ICMLSC 2020: PROCEEDINGS OF THE 4TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND SOFT COMPUTING, 2020: 34-39
  • [4] Multimodal Mutual Attention-Based Sentiment Analysis Framework Adapted to Complicated Contexts
    He, Lijun
    Wang, Ziqing
    Wang, Liejun
    Li, Fan
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33(12): 7131-7143
  • [5] Multimodal sentiment analysis based on multiple attention
    Wang, Hongbin
    Ren, Chun
    Yu, Zhengtao
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 140
  • [6] The Weighted Cross-Modal Attention Mechanism With Sentiment Prediction Auxiliary Task for Multimodal Sentiment Analysis
    Chen, Qiupu
    Huang, Guimin
    Wang, Yabing
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30: 2689-2695
  • [7] Attention fusion network for multimodal sentiment analysis
    Luo, Yuanyi
    Wu, Rui
    Liu, Jiafeng
    Tang, Xianglong
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83(03): 8207-8217
  • [8] Sentiment analysis of social media comments based on multimodal attention fusion network
    Liu, Ziyu
    Yang, Tao
    Chen, Wen
    Chen, Jiangchuan
    Li, Qinru
    Zhang, Jun
    APPLIED SOFT COMPUTING, 2024, 164
  • [9] Multilayer interactive attention bottleneck transformer for aspect-based multimodal sentiment analysis
    Sun, Jiachang
    Zhu, Fuxian
    MULTIMEDIA SYSTEMS, 2025, 31 (01)