ATPF: An Adaptive Temporal Perturbation Framework for Adversarial Attacks on Temporal Knowledge Graph

Cited by: 0
Authors
Liao, Longquan [1 ,2 ]
Zheng, Linjiang [1 ,2 ]
Shang, Jiaxing [1 ,2 ]
Li, Xu [1 ,2 ]
Chen, Fengwen [3 ]
Affiliations
[1] Chongqing Univ, Coll Comp Sci, Chongqing 400044, Peoples R China
[2] Chongqing Univ, Minist Educ, Key Lab Dependable Serv Comp Cyber Phys Soc, Chongqing 400044, Peoples R China
[3] Chongqing Univ, Sch Econ & Business Adm, Chongqing 400044, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Perturbation methods; Knowledge graphs; Predictive models; Adaptation models; Timing; Noise; Heuristic algorithms; Closed box; Robustness; Representation learning; Adversarial attack; deep learning; dynamic network; temporal knowledge graph;
DOI
10.1109/TKDE.2024.3510689
CLC (Chinese Library Classification) number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Robustness is paramount for ensuring the reliability of knowledge graph models in safety-sensitive applications. While recent research has delved into adversarial attacks on static knowledge graph models, the more practical setting of temporal knowledge graphs has been largely overlooked. To fill this gap, we present the Adaptive Temporal Perturbation Framework (ATPF), a novel adversarial attack framework aimed at probing the robustness of temporal knowledge graph (TKG) models. The general idea of ATPF is to inject perturbations into the victim model's input to undermine its predictions. First, we propose the Temporal Perturbation Prioritization (TPP) algorithm, which identifies the optimal time sequence for perturbation injection before initiating attacks. Subsequently, we design the Rank-Based Edge Manipulation (RBEM) algorithm, which generates both edge-addition and edge-removal perturbations under a black-box setting. With ATPF, we present two adversarial attack methods: the stringent ATPF-hard and the more lenient ATPF-soft, each imposing different perturbation constraints. Our experimental evaluations on the link prediction task for TKGs demonstrate the superior attack performance of our methods compared to baselines. Furthermore, we find that strategically placing a single perturbation often suffices to successfully compromise a target link.
Pages: 1091-1104
Page count: 14
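
As a rough illustration of the two-stage attack flow the abstract describes (first prioritize time steps for perturbation injection, then add or remove edges using only black-box rank queries), the Python sketch below treats the victim TKG model as an opaque oracle that returns the predicted rank of a target link. All names and interfaces here (Quad, rank_of, the greedy budgeted loop) are illustrative assumptions for exposition; they are not the paper's actual TPP or RBEM algorithms.

```python
# Hypothetical sketch of a timestamp-prioritized, rank-guided black-box attack
# on a TKG link predictor. The victim model is abstracted as rank_of(facts),
# a black-box query returning the rank of the target link given a set of
# timestamped facts (quadruples). Names and logic are illustrative only.
from typing import Callable, List, Set, Tuple

Quad = Tuple[str, str, str, int]  # (subject, relation, object, timestamp)


def prioritize_timestamps(
    rank_of: Callable[[Set[Quad]], int],
    facts: Set[Quad],
    timestamps: List[int],
) -> List[int]:
    """Order timestamps by how much removing their facts worsens the target
    link's rank (largest degradation first) -- a stand-in for the idea of
    choosing where in time to inject perturbations."""
    base_rank = rank_of(facts)
    impact = []
    for t in timestamps:
        reduced = {q for q in facts if q[3] != t}
        impact.append((rank_of(reduced) - base_rank, t))
    return [t for _, t in sorted(impact, reverse=True)]


def greedy_edge_manipulation(
    rank_of: Callable[[Set[Quad]], int],
    facts: Set[Quad],
    candidates: List[Tuple[str, Quad]],  # ("add" | "remove", quadruple)
    budget: int = 1,
) -> Set[Quad]:
    """Greedily apply the edge addition or removal that most worsens the
    target link's rank, using only black-box rank queries."""
    perturbed = set(facts)
    for _ in range(budget):
        best, best_rank = None, rank_of(perturbed)
        for op, quad in candidates:
            trial = (perturbed | {quad}) if op == "add" else (perturbed - {quad})
            r = rank_of(trial)
            if r > best_rank:  # larger rank = worse prediction for the target
                best, best_rank = trial, r
        if best is None:  # no candidate degrades the prediction further
            break
        perturbed = best
    return perturbed
```

The default budget of one perturbation in this sketch merely mirrors the abstract's observation that a single strategically placed perturbation often suffices to compromise a target link; the paper's ATPF-hard and ATPF-soft variants impose their own perturbation constraints.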