Relevance-aware visual entity filter network for multimodal aspect-based sentiment analysis

Cited by: 0
Authors
Chen, Yifan [1 ]
Xiong, Haoliang [1 ]
Li, Kuntao [1 ]
Mai, Weixing [1 ]
Xue, Yun [1 ]
Cai, Qianhua [1 ]
Li, Fenghuan [2 ]
Affiliations
[1] South China Normal Univ, Sch Elect & Informat Engn, Foshan 528225, Guangdong, Peoples R China
[2] Guangdong Univ Technol, Sch Comp Sci & Technol, Guangzhou 510006, Guangdong, Peoples R China
Keywords
Multimodal aspect-based sentiment analysis (MABSA); Relevance-aware visual entity filter; External knowledge; Image-aspect relevance; Cross-modal alignment;
DOI
10.1007/s13042-024-02342-w
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Multimodal aspect-based sentiment analysis (MABSA), which aims to identify the sentiment polarity of each aspect mentioned in an image-text pair, has sparked considerable research interest in the field of multimodal analysis. Although existing approaches have shown remarkable results in incorporating external knowledge to enhance visual entity information, they still suffer from two problems: (1) image-aspect global relevance and (2) entity-aspect local alignment. To tackle these issues, we propose a Relevance-Aware Visual Entity Filter Network (REF) for MABSA. Specifically, we utilize the nouns of adjective-noun pairs (ANPs) extracted from the given image as bridges to facilitate cross-modal feature alignment. Moreover, we introduce an additional "UNRELATED" marker word and apply Contrastive Content Re-sourcing (CCR) and Contrastive Content Swapping (CCS) constraints to obtain accurate attention weights that identify image-aspect relevance, dynamically controlling the contribution of visual information. We further adopt the reversed attention weight distributions to selectively filter out aspect-unrelated visual entities for better entity-aspect alignment. Comprehensive experimental results demonstrate the consistent superiority of our REF model over state-of-the-art approaches on the Twitter-2015 and Twitter-2017 datasets.
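To make the filtering idea concrete, below is a minimal PyTorch sketch assumed from the abstract, not the authors' implementation. A learnable "UNRELATED" embedding is appended to the visual entity (ANP-noun) features; the attention mass it absorbs estimates image-aspect irrelevance and gates the visual contribution, while reversed per-entity weights suppress aspect-unrelated entities. The module name RelevanceAwareFilter, the dimensions, and the mean-threshold filtering rule are illustrative assumptions; in the paper the attention weights are trained with the CCR/CCS constraints rather than derived from a fixed rule.

```python
# Minimal sketch of relevance-aware visual entity filtering, assumed from the
# abstract; names, dimensions, and the threshold rule are illustrative only.
import torch
import torch.nn as nn


class RelevanceAwareFilter(nn.Module):
    def __init__(self, dim: int = 768):
        super().__init__()
        # Learnable "UNRELATED" marker appended to the visual entities.
        self.unrelated = nn.Parameter(torch.randn(1, 1, dim))
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)

    def forward(self, aspect: torch.Tensor, entities: torch.Tensor) -> torch.Tensor:
        # aspect:   (batch, dim)     pooled aspect representation
        # entities: (batch, n, dim)  embeddings of ANP nouns from the image
        b, n, d = entities.shape
        keys = torch.cat([entities, self.unrelated.expand(b, 1, d)], dim=1)
        q = self.query(aspect).unsqueeze(1)                     # (b, 1, d)
        scores = q @ self.key(keys).transpose(1, 2) / d ** 0.5  # (b, 1, n + 1)
        attn = scores.softmax(dim=-1).squeeze(1)                # (b, n + 1)

        relevance = 1.0 - attn[:, -1:]   # attention NOT on the "UNRELATED" marker
        entity_attn = attn[:, :-1]       # per-entity attention weights
        # Reversed weights (1 - attn) flag aspect-unrelated entities; a simple
        # mean threshold stands in here for the CCR/CCS-trained filtering.
        reversed_w = 1.0 - entity_attn
        keep = (reversed_w < reversed_w.mean(dim=-1, keepdim=True)).float()
        filtered = (entities * (entity_attn * keep).unsqueeze(-1)).sum(dim=1)
        return relevance * filtered      # relevance-gated visual feature


if __name__ == "__main__":
    f = RelevanceAwareFilter(dim=32)
    print(f(torch.randn(2, 32), torch.randn(2, 5, 32)).shape)  # torch.Size([2, 32])
```

In this sketch the gating is multiplicative: when most attention mass falls on the "UNRELATED" marker, the relevance scalar approaches zero and the visual feature contributes little, matching the abstract's description of dynamically controlling the visual contribution.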
Pages: 1389-1402
Page count: 14
Related Papers
50 items in total (items 41-50 shown)
  • [41] AMIFN: Aspect-guided multi-view interactions and fusion network for multimodal aspect-based sentiment analysis
    Yang, Juan
    Xu, Mengya
    Xiao, Yali
    Du, Xu
    Neurocomputing, 2024, 573
  • [42] Aspect-based sentiment analysis with gated alternate neural network
    Liu, Ning
    Shen, Bo
Knowledge-Based Systems, 2020, 188
  • [43] Relational Graph Attention Network for Aspect-based Sentiment Analysis
    Wang, Kai
    Shen, Weizhou
    Yang, Yunyi
    Quan, Xiaojun
    Wang, Rui
58th Annual Meeting of the Association for Computational Linguistics (ACL 2020), 2020: 3229-3238
  • [44] Filter channel network based on contextual position weight for aspect-based sentiment classification
    Zhu, Chao
    Yi, Benshun
    Luo, Laigan
Journal of Supercomputing, 2024, 80(12): 17874-17894
  • [45] Convolution-based Memory Network for Aspect-based Sentiment Analysis
    Fan, Chuang
    Gao, Qinghong
    Du, Jiachen
    Gui, Lin
    Xu, Ruifeng
    Wong, Kam-Fai
ACM/SIGIR Proceedings, 2018: 1161-1164
  • [46] Multi-grained fusion network with self-distillation for aspect-based multimodal sentiment analysis
    Yang, Juan
    Xiao, Yali
    Du, Xu
Knowledge-Based Systems, 2024, 293
  • [47] Joint Modal Circular Complementary Attention for Multimodal Aspect-Based Sentiment Analysis
    Liu, Hao
    He, Lijun
    Liang, Jiaxi
2024 IEEE International Conference on Multimedia and Expo Workshops (ICMEW 2024), 2024
  • [48] Self-adaptive attention fusion for multimodal aspect-based sentiment analysis
    Wang, Ziyue
    Guo, Junjun
Mathematical Biosciences and Engineering, 2024, 21(1): 1305-1320
  • [49] Multilayer interactive attention bottleneck transformer for aspect-based multimodal sentiment analysis
    Sun, Jiachang
    Zhu, Fuxian
Multimedia Systems, 2025, 31(1)
  • [50] MASAD: A large-scale dataset for multimodal aspect-based sentiment analysis
    Zhou, Jie
    Zhao, Jiabao
    Huang, Jimmy Xiangji
    Hu, Qinmin Vivian
    He, Liang
    NEUROCOMPUTING, 2021, 455 : 47 - 58