Neural entity alignment with cross-modal supervision

Cited by: 9
Authors
Su, Fenglong [1 ]
Xu, Chengjin [2 ]
Yang, Han [3 ]
Chen, Zhongwu [1 ]
Jing, Ning [1 ]
Affiliations
[1] Natl Univ Def Technol, Changsha, Peoples R China
[2] Int Digital Econ Acad, Shenzhen, Peoples R China
[3] Peking Univ, Beijing, Peoples R China
Keywords
Knowledge graph alignment; Supporting knowledge; Relational attention network; Semi-supervised learning;
DOI
10.1016/j.ipm.2022.103174
CLC classification number
TP [automation technology, computer technology];
Discipline classification code
0812;
Abstract
Most existing entity alignment (EA) solutions rely primarily on structural information to align entities, which is biased and disregards additional multi-source information. To compensate for inadequate structural details, this article proposes SKEA, a simple but flexible framework for entity alignment with cross-modal supervision from supporting knowledge. We employ a relational aggregation network to exploit information about each entity and its neighbors. To overcome the limitations of relational features, two multi-modal encoder modules extract visual and textual information. In each iteration, SKEA uses the knowledge of the two reference modalities to generate a new set of potentially aligned entity pairs, which enhances the model's supervision. Notably, the supporting information used in our framework does not participate in the network's backpropagation, which considerably improves efficiency and differs markedly from earlier work. Experiments demonstrate that, compared with existing baselines, our framework incorporates multi-aspect information efficiently and enables supervisory signals from other modalities to propagate to entities. A maximum performance improvement of 5.24% indicates the superiority of our framework, especially for sparse KGs.
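The abstract describes an iterative semi-supervised loop: structural embeddings are trained on seed pairs, frozen visual and textual embeddings score candidates without receiving gradients, and agreeing candidates are promoted to new supervision. The following is a minimal sketch of that loop under stated assumptions: mutual-nearest-neighbor matching over cosine similarities is a common pseudo-labeling heuristic, not necessarily the paper's exact criterion, and all names (propose_pairs, the random toy embeddings) are illustrative.

```python
import numpy as np

def cosine_sim(a, b):
    # Pairwise cosine similarity between two embedding matrices.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def mutual_nearest_pairs(sim):
    # Keep (i, j) only when i and j are each other's nearest neighbor,
    # a standard precaution before trusting pseudo-labels.
    fwd = sim.argmax(axis=1)
    bwd = sim.argmax(axis=0)
    return {(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i}

def propose_pairs(struct_sim, vis_sim, txt_sim):
    # Hypothetical agreement rule: a candidate pair must be supported by
    # the structural view AND at least one frozen reference modality.
    struct = mutual_nearest_pairs(struct_sim)
    support = mutual_nearest_pairs(vis_sim) | mutual_nearest_pairs(txt_sim)
    return struct & support

# Toy stand-ins for the three embedding spaces of two KGs (shapes assumed).
rng = np.random.default_rng(0)
n1, n2, d = 100, 100, 64
struct1, struct2 = rng.normal(size=(n1, d)), rng.normal(size=(n2, d))
vis1, vis2 = rng.normal(size=(n1, d)), rng.normal(size=(n2, d))   # frozen
txt1, txt2 = rng.normal(size=(n1, d)), rng.normal(size=(n2, d))   # frozen

seeds = {(i, i) for i in range(10)}  # initial aligned seed pairs
for it in range(3):
    # 1) (Re)train the relational network on `seeds` -- omitted here; we
    #    simply reuse the structural embeddings as if they were updated.
    # 2) Score candidates in each modality. Gradients never flow into the
    #    visual/textual encoders, so this step is pure inference, matching
    #    the abstract's claim that supporting knowledge skips backprop.
    new_pairs = propose_pairs(
        cosine_sim(struct1, struct2),
        cosine_sim(vis1, vis2),
        cosine_sim(txt1, txt2),
    )
    seeds |= new_pairs  # enlarge the supervision set for the next round
    print(f"iteration {it}: {len(seeds)} seed pairs")
```

Because the reference modalities are frozen, each iteration only pays the cost of similarity search over pre-computed embeddings, which is consistent with the efficiency argument in the abstract.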
Pages: 14
Related papers (50 in total)
  • [1] Cross-Modal Graph Attention Network for Entity Alignment
    Xu, Baogui
    Xu, Chengjin
    Su, Bing
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 3715 - 3723
  • [2] Learning Visual Locomotion with Cross-Modal Supervision
    Loquercio, Antonio
    Kumar, Ashish
    Malik, Jitendra
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023), 2023, : 7295 - 7302
  • [3] Neural Machine Translation Method Based on Cross-modal Entity Information Fusion
    Huang X.
    Zhang J.-J.
    Zong C.-Q.
    Zidonghua Xuebao/Acta Automatica Sinica, 2023, 49 (06): 1170 - 1180
  • [4] Survey on Cross-modal Data Entity Resolution
    Cao J.-J.
    Nie Z.-B.
    Zheng Q.-B.
    Lü G.-J.
    Zeng Z.-X.
    Ruan Jian Xue Bao/Journal of Software, 2023, 34 (12): 5822 - 5847
  • [5] Token Embeddings Alignment for Cross-Modal Retrieval
    Xie, Chen-Wei
    Wu, Jianmin
    Zheng, Yun
    Pan, Pan
    Hua, Xian-Sheng
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 4555 - 4563
  • [6] Cross-modal Variational Alignment of Latent Spaces
    Theodoridis, Thomas
    Chatzis, Theocharis
    Solachidis, Vassilios
    Dimitropoulos, Kosmas
    Daras, Petros
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 4127 - 4136
  • [7] Adequate alignment and interaction for cross-modal retrieval
    Wang, Mingkang
    Meng, Min
    Liu, Jigang
    Wu, Jigang
    Virtual Reality & Intelligent Hardware, 2023, 5 (06): 509 - 522
  • [8] Cross-Modal Translation and Alignment for Survival Analysis
    Zhou, Fengtao
    Chen, Hao
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 21428 - 21437
  • [9] Robust cross-modal retrieval with alignment refurbishment
    Guo, Jinyi
    Ding, Jieyu
    FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2023, 24 (10) : 1403 - 1415
  • [10] Deep Discrete Cross-Modal Hashing with Multiple Supervision
    Yu, En
    Ma, Jianhua
    Sun, Jiande
    Chang, Xiaojun
    Zhang, Huaxiang
    Hauptmann, Alexander G.
    NEUROCOMPUTING, 2022, 486 : 215 - 224