Exploiting Pre-Trained Language Models for Black-Box Attack against Knowledge Graph Embeddings

Citations: 0
Authors:
Yang, Guangqian [1 ]
Zhang, Lei [1 ]
Liu, Yi [2 ]
Xie, Hongtao [1 ]
Mao, Zhendong [1 ]
Affiliations:
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Peoples Daily Online, Beijing, Peoples R China
Funding:
National Natural Science Foundation of China
Keywords:
Knowledge Graph; Adversarial Attack; Language Model;
DOI: 10.1145/3688850
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract:
Despite emerging research on adversarial attacks against knowledge graph embedding (KGE) models, most existing work focuses on white-box settings. White-box attacks are difficult to mount in practice, however, because they require access to model parameters that victims are unlikely to expose, making black-box attacks far more realistic. In this article, we propose a novel black-box attack method that requires access only to the knowledge graph data itself. Specifically, we use pre-trained language models (PLMs) to encode the textual features of the knowledge graph, an aspect neglected by previous research, and then exploit these text encodings to identify the most influential triples and construct corrupted triples for the attack. To improve the transferability of the attack, we further fine-tune the PLM by enriching triple embeddings with structural information. Extensive experiments on two knowledge graph datasets demonstrate the effectiveness of the proposed method.
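As a rough illustration of the pipeline the abstract describes, the following Python sketch verbalizes knowledge-graph triples as text, encodes them with a PLM, and ranks candidate corrupted triples by their similarity to a target triple in the PLM embedding space. This is a minimal sketch under stated assumptions, not the authors' implementation: the bert-base-uncased model choice, the triple verbalization format, and the cosine-similarity influence score are all illustrative stand-ins for the paper's actual selection criterion and fine-tuned encoder.

```python
# Minimal sketch: score candidate corrupted triples against a target triple
# using a pre-trained language model's text embeddings.
# Assumptions (not from the paper): bert-base-uncased as the PLM, naive
# "head relation tail" verbalization, cosine similarity as the influence proxy.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed_triple(head: str, relation: str, tail: str) -> torch.Tensor:
    """Verbalize a triple as plain text and return its [CLS] embedding."""
    text = f"{head} {relation} {tail}"
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # First token position holds the [CLS] representation.
    return outputs.last_hidden_state[:, 0, :].squeeze(0)

def rank_corruptions(target, candidates):
    """Rank candidate corrupted triples by cosine similarity to the target
    triple's text embedding (higher = assumed more influential)."""
    target_emb = embed_triple(*target)
    scored = []
    for cand in candidates:
        cand_emb = embed_triple(*cand)
        sim = torch.nn.functional.cosine_similarity(
            target_emb, cand_emb, dim=0
        ).item()
        scored.append((sim, cand))
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

# Usage: pick which corruption of a target triple to inject into the KG.
target = ("Barack_Obama", "born_in", "Honolulu")
candidates = [
    ("Barack_Obama", "born_in", "Chicago"),
    ("Barack_Obama", "born_in", "Nairobi"),
]
for sim, cand in rank_corruptions(target, candidates):
    print(f"{sim:.4f}  {cand}")
```

A structure-aware variant, as the abstract suggests, would further fine-tune the encoder so that triple embeddings also reflect graph neighborhood information before this ranking step.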
Pages: 14