Domain-adaptive transfer network for visual-textual cross-domain sentiment classification

Cited by: 0
Authors
Wang, Yuan [1,2]
Tohti, Turdi [1,2]
Han, Dongfang [1,2]
Zuo, Zicheng [1,2]
Liang, Yi [1,2]
Liao, Yuanyuan [1,2]
Yang, Qingwen [1,2]
Hamdulla, Askar [1,2]
Affiliations
[1] Xinjiang Univ, Sch Comp Sci & Technol, Urumqi 830017, Peoples R China
[2] Xinjiang Key Lab Signal Detect & Proc, Urumqi 830017, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Multimodal sentiment analysis; Domain adaptation; Adversarial learning; Cross-domain sentiment analysis; Knowledge transfer
DOI
10.1007/s11227-025-07273-z
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Cross-domain sentiment analysis addresses the scarcity of labeled data by transferring invariant knowledge across domains. Existing studies focus on unimodal transfer, but differences between modalities hinder the transfer of domain information and limit the acquisition of domain-invariant knowledge from multimodal data. We therefore propose a domain-adaptive transfer network (DATN) for multimodal cross-domain sentiment analysis. A joint representation is obtained through a bidirectional visual-textual interactive fusion network, adversarial discriminative domain adaptation is employed to learn the domain-shared knowledge in the marginal distribution of the joint representation, and a multimodal domain-adaptive module aligns the conditional distributions. Extensive experiments on public and self-constructed datasets demonstrate the effectiveness of the model and suggest that the self-constructed datasets can serve as a new benchmark. Compared with the best-performing baseline, accuracy improves by 3.6% and 8.2% on the two public datasets and by 9.2% and 3.3% on the two self-constructed datasets, respectively.
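To make the pipeline the abstract describes more concrete, the following is a minimal PyTorch sketch, not the authors' implementation: pre-extracted text and image features are fused by bidirectional cross-attention into a joint representation, and a domain discriminator trained through a gradient reversal layer (a DANN-style stand-in for the paper's adversarial discriminative setup) pushes that representation toward domain invariance. All module names, dimensions, and hyperparameters are illustrative assumptions.

```python
# Sketch of bidirectional visual-textual fusion plus adversarial
# marginal-distribution alignment. Names and sizes are illustrative.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class BidirectionalFusion(nn.Module):
    """Cross-attends text->image and image->text, then pools a joint vector."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.t2v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.v2t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, text_feats, img_feats):
        t_attn, _ = self.t2v(text_feats, img_feats, img_feats)   # text queries image
        v_attn, _ = self.v2t(img_feats, text_feats, text_feats)  # image queries text
        joint = torch.cat([t_attn.mean(1), v_attn.mean(1)], dim=-1)
        return self.proj(joint)  # joint multimodal representation

class DATNSketch(nn.Module):
    def __init__(self, dim=256, num_classes=2, lambd=1.0):
        super().__init__()
        self.fusion = BidirectionalFusion(dim)
        self.lambd = lambd
        self.sentiment_head = nn.Linear(dim, num_classes)
        self.domain_head = nn.Sequential(  # source-vs-target discriminator
            nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, text_feats, img_feats):
        joint = self.fusion(text_feats, img_feats)
        sentiment = self.sentiment_head(joint)
        # Reversed gradients push the fusion toward domain-invariant features.
        domain = self.domain_head(GradReverse.apply(joint, self.lambd))
        return sentiment, domain

# Toy usage with pre-extracted token/patch features for a batch of 8 samples.
text = torch.randn(8, 32, 256)   # e.g., BERT token embeddings projected to 256-d
image = torch.randn(8, 49, 256)  # e.g., ViT patch embeddings projected to 256-d
sentiment_logits, domain_logits = DATNSketch()(text, image)
print(sentiment_logits.shape, domain_logits.shape)  # torch.Size([8, 2]) twice
```

In a full training loop, the sentiment head would see only labeled source batches while the domain head sees both source and target batches, and a conditional-alignment term (e.g., class-conditioned discriminators or a class-wise distribution-matching loss) would be added on top; this sketch covers only the marginal-alignment step.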
Pages: 30