Unsupervised Cross-Domain Rumor Detection with Contrastive Learning and Cross-Attention

Cited by: 0
Authors
Ran, Hongyan [1 ]
Jia, Caiyan [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing 100044, Peoples R China
Source
THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 11 | 2023
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
PROPAGATION; NETWORK;
DOI
Not available
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Massive rumors usually appear along with breaking news or trending topics, seriously hindering the truth. Existing rumor detection methods are mostly focused on the same domain, and thus perform poorly in cross-domain scenarios due to domain shift. In this work, we propose an end-to-end instance-wise and prototype-wise contrastive learning model with a cross-attention mechanism for cross-domain rumor detection. The model not only performs cross-domain feature alignment but also aligns target samples with the corresponding prototypes of a given source domain. Since labels in the target domain are unavailable, we use a clustering-based approach, with centers carefully initialized by a batch of source-domain samples, to produce pseudo labels. Moreover, we apply a cross-attention mechanism to pairs of source and target data with the same labels to learn domain-invariant representations. Because the samples in a domain pair tend to express similar semantic patterns, especially in people's attitudes (e.g., supporting or denying) towards the same category of rumors, the discrepancy between the paired source and target domains is decreased. We conduct experiments on four groups of cross-domain datasets and show that our proposed model achieves state-of-the-art performance.
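A minimal sketch of two mechanisms described in the abstract, assuming a PyTorch setting: (1) pseudo-labeling unlabeled target samples by their nearest source-class prototype, and (2) cross-attention over a (source, target) pair with the same label. This is an illustrative sketch, not the authors' released implementation; the helper names (source_prototypes, pseudo_label_targets, PairCrossAttention), the feature dimension, and the head count are assumptions, and it further assumes every class appears in the source batch used to build prototypes.

# Illustrative sketch only; not the paper's official code.
import torch
import torch.nn.functional as F
from torch import nn


def source_prototypes(src_feats: torch.Tensor, src_labels: torch.Tensor,
                      num_classes: int) -> torch.Tensor:
    """Class centers (prototypes) computed from a batch of labeled source features.
    Assumes every class index in [0, num_classes) occurs in src_labels."""
    protos = torch.stack([
        src_feats[src_labels == c].mean(dim=0) for c in range(num_classes)
    ])
    return F.normalize(protos, dim=-1)


def pseudo_label_targets(tgt_feats: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Assign each unlabeled target sample to its most similar prototype (cosine similarity)."""
    sims = F.normalize(tgt_feats, dim=-1) @ protos.t()   # (B_t, C)
    return sims.argmax(dim=-1)                            # pseudo labels for the target batch


class PairCrossAttention(nn.Module):
    """Cross-attention that lets a target post attend to a same-label source post,
    mixing source evidence into the target representation."""

    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tgt_tokens: torch.Tensor, src_tokens: torch.Tensor) -> torch.Tensor:
        # Queries come from the target sequence; keys/values come from the source sequence.
        out, _ = self.attn(query=tgt_tokens, key=src_tokens, value=src_tokens)
        return out


if __name__ == "__main__":
    torch.manual_seed(0)
    C, D = 2, 768                                   # e.g. rumor / non-rumor, BERT-sized features
    src_feats, src_labels = torch.randn(16, D), torch.randint(0, C, (16,))
    tgt_feats = torch.randn(8, D)

    protos = source_prototypes(src_feats, src_labels, C)
    pseudo = pseudo_label_targets(tgt_feats, protos)
    print("pseudo labels:", pseudo.tolist())

    xattn = PairCrossAttention(dim=D)
    fused = xattn(tgt_tokens=torch.randn(1, 20, D), src_tokens=torch.randn(1, 24, D))
    print("fused shape:", tuple(fused.shape))

In the full model, the pseudo labels would drive the prototype-wise contrastive term and select the same-label source partner for each target sample before the cross-attention step.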
Pages: 13510 - 13518
Number of Pages: 9