Exploiting Pre-Trained Language Models for Black-Box Attack against Knowledge Graph Embeddings

Times Cited: 0
Authors
Yang, Guangqian [1 ]
Zhang, Lei [1 ]
Liu, Yi [2 ]
Xie, Hongtao [1 ]
Mao, Zhendong [1 ]
Affiliations
[1] University of Science and Technology of China, Hefei, China
[2] People's Daily Online, Beijing, China
Funding
National Natural Science Foundation of China
Keywords
Knowledge Graph; Adversarial Attack; Language Model;
DOI
10.1145/3688850
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Despite emerging research on adversarial attacks against knowledge graph embedding (KGE) models, most of it focuses on white-box attack settings. However, white-box attacks are difficult to apply in practice compared to black-box attacks, since they require access to model parameters that are unlikely to be provided. In this article, we propose a novel black-box attack method that requires access only to knowledge graph data, making it more realistic in real-world attack scenarios. Specifically, we utilize pre-trained language models (PLMs) to encode the text features of knowledge graphs, an aspect neglected by previous research. We then employ these encoded text features to identify the most influential triples, from which we construct corrupted triples for the attack. To improve the transferability of the attack, we further propose to fine-tune the PLM by enriching triple embeddings with structure information. Extensive experiments conducted on two knowledge graph datasets illustrate the effectiveness of our proposed method.
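The pipeline the abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: triples are verbalized into text, embedded, ranked by similarity to the target triple, and the most influential ones are corrupted. A toy bag-of-words counter stands in for the PLM encoder, and `fake_entity` is a hypothetical placeholder for the adversarially substituted entity.

```python
# Hedged sketch of a text-feature-based black-box attack on a knowledge graph:
# verbalize triples, embed them, rank candidates by similarity to the target,
# and corrupt the most influential triple. A bag-of-words Counter stands in
# for a real PLM encoder purely for illustration.
import math
from collections import Counter

def verbalize(triple):
    """Turn a (head, relation, tail) triple into a plain sentence."""
    h, r, t = triple
    return f"{h} {r.replace('_', ' ')} {t}"

def embed(text):
    """Toy stand-in for a PLM text encoder: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_influential(target, neighbours, k=1):
    """Rank neighbour triples by text similarity to the target triple."""
    t_emb = embed(verbalize(target))
    scored = sorted(neighbours,
                    key=lambda tr: cosine(t_emb, embed(verbalize(tr))),
                    reverse=True)
    return scored[:k]

def corrupt(triple, fake_entity="fake_entity"):
    """Build an adversarial triple by replacing the tail entity."""
    h, r, _ = triple
    return (h, r, fake_entity)

target = ("alice", "works_for", "acme_corp")
neighbours = [
    ("alice", "lives_in", "springfield"),
    ("bob", "works_for", "acme_corp"),
    ("acme_corp", "located_in", "springfield"),
]
influential = most_influential(target, neighbours, k=1)
adversarial = [corrupt(tr) for tr in influential]
print(influential)   # most text-similar neighbour triple
print(adversarial)   # its corrupted counterpart
```

A real implementation would replace `embed` with a fine-tuned PLM (e.g., mean-pooled transformer outputs enriched with structure information, as the abstract suggests) and choose corruptions that degrade the victim KGE model's predictions.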
Pages: 14