Interactive Contrastive Learning for Self-Supervised Entity Alignment

Cited by: 17
Authors
Zeng, Kaisheng [1 ]
Dong, Zhenhao [2 ]
Hou, Lei [3 ]
Cao, Yixin [4 ]
Hu, Minghao [5 ]
Yu, Jifan [1 ]
Lv, Xin [1 ]
Cao, Lei [1 ]
Wang, Xin [1 ]
Liu, Haozhuang [1 ]
Huang, Yi [6 ]
Feng, Junlan [6 ]
Wan, Jing [2 ]
Li, Juanzi [7 ]
Feng, Ling [7 ]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] Beijing Univ Chem Technol, Beijing, Peoples R China
[3] Tsinghua, BNRist, Dept Comp Sci & Technol, Beijing, Peoples R China
[4] Singapore Management Univ, Singapore, Singapore
[5] Informat Res Ctr Mil Sci, Beijing, Peoples R China
[6] China Mobile Res Inst, Beijing, Peoples R China
[7] Tsinghua Univ, BNRist, CST, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022 | 2022
Keywords
Knowledge Graph; Entity Alignment; Self-Supervised Learning; Contrastive Learning
DOI
10.1145/3511808.3557364
Chinese Library Classification (CLC)
TP [automation technology, computer technology];
Subject classification code
0812;
Abstract
Self-supervised entity alignment (EA) aims to link equivalent entities across different knowledge graphs (KGs) without using pre-aligned entity pairs. The current state-of-the-art (SOTA) self-supervised EA approach draws inspiration from contrastive learning, originally designed in computer vision and based on instance discrimination and contrastive loss, and suffers from two shortcomings. First, it puts unidirectional emphasis on pushing sampled negative entities far away rather than on pulling positively aligned pairs close, as is done in well-established supervised EA. Second, it advocates a minimum-information requirement for self-supervised EA, whereas we argue that a KG's self-described side information (e.g., entity names, relation names, entity descriptions) should be exploited to the maximum extent for the self-supervised EA task. In this work, we propose an interactive contrastive learning model for self-supervised EA. It conducts bidirectional contrastive learning by building pseudo-aligned entity pairs as pivots to achieve direct cross-KG information interaction. It further exploits the integration of entity textual and structural information and carefully designs encoders for better utilization in the self-supervised setting. Experimental results show that our approach outperforms the previous best self-supervised method by a large margin (over 9% absolute improvement in Hits@1 on average) and performs on par with previous SOTA supervised counterparts, demonstrating the effectiveness of interactive contrastive learning for self-supervised EA. The code and data are available at https://github.com/THU-KEG/ICLEA.
Pages: 2465-2475
Number of pages: 11
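
The abstract describes bidirectional contrastive learning that uses pseudo-aligned entity pairs as pivots for direct cross-KG interaction. Below is a minimal sketch of that idea, not the authors' ICLEA implementation: a symmetric InfoNCE-style loss that pulls each pseudo-aligned pair together in both the KG1-to-KG2 and KG2-to-KG1 directions while pushing in-batch negatives apart. The function and variable names (bidirectional_contrastive_loss, emb_kg1, emb_kg2) and the temperature value are illustrative assumptions; in the paper's setting, the input embeddings would come from its textual and structural encoders.

```python
import torch
import torch.nn.functional as F


def bidirectional_contrastive_loss(emb_kg1: torch.Tensor,
                                   emb_kg2: torch.Tensor,
                                   temperature: float = 0.05) -> torch.Tensor:
    # Row i of each tensor holds the embedding of one pseudo-aligned pair
    # (the i-th pivot); every other row in the batch serves as a negative.
    z1 = F.normalize(emb_kg1, dim=-1)               # (B, d) unit-norm KG1 entities
    z2 = F.normalize(emb_kg2, dim=-1)               # (B, d) unit-norm KG2 entities
    logits = z1 @ z2.t() / temperature              # (B, B) scaled cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    loss_12 = F.cross_entropy(logits, targets)      # KG1 -> KG2: pull each pivot pair together
    loss_21 = F.cross_entropy(logits.t(), targets)  # KG2 -> KG1: symmetric term makes it bidirectional
    return 0.5 * (loss_12 + loss_21)


if __name__ == "__main__":
    # Random embeddings stand in for encoder outputs; rows are pseudo-aligned.
    e1 = torch.randn(128, 300)   # entities from KG1
    e2 = torch.randn(128, 300)   # their pseudo-aligned counterparts from KG2
    print(bidirectional_contrastive_loss(e1, e2).item())
```

The symmetric sum over both directions is what distinguishes this sketch from a unidirectional contrastive loss that only pushes sampled negatives apart from one side.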