Denoising Structure against Adversarial Attacks on Graph Representation Learning

Authors
Chen, Na [1 ]
Li, Ping [1 ,2 ]
Huang, Jincheng [1 ,3 ]
Zhang, Kai [4 ]
Affiliations
[1] School of Computer Science and Software Engineering, Southwest Petroleum University, Chengdu
[2] Institute of Artificial Intelligence, Southwest Petroleum University, Chengdu
[3] University of Electronic Science and Technology, Chengdu
[4] School of Computer Science and Technology, East China Normal University, Shanghai
Funding
National Natural Science Foundation of China;
Keywords
Adversarial Attacks; Graph Convolutional Networks; Node classification; Robustness;
DOI
10.1145/3714428
Abstract
Despite their excellent performance in graph representation learning, graph convolutional networks have proven vulnerable to imperceptible adversarial perturbations of the connectivity between nodes. In this work, by examining the impact of adversarial attacks on graph data, we empirically find that the dominant edge-addition attacks generally increase the heterophily between connected nodes, which misleads transductive inference models on the node classification task. To defend against such attacks, we develop a Two-Stage Denoising (TSD) method that removes possibly malicious edges so as to mitigate the heterophily introduced by attacks. In particular, after a rough removal of links with very low feature similarity, our method further spots potentially heterophilous links by predicting node labels with a multi-view labeling consensus. This design rests on the assumption that if the label predictions for the same node from two different views of the graph are consistent, the labeling is likely reliable. Experiments demonstrate that denoising a graph in this way remarkably improves the robustness of graph convolutional networks on the node classification task, compared to several strong robust graph neural network baselines. © 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM.
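The two-stage procedure described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's actual implementation: the cosine-similarity measure, the threshold `tau`, and all function names are assumptions made here for clarity.

```python
import numpy as np

def cosine_similarity(x: np.ndarray, y: np.ndarray) -> float:
    """Cosine similarity between two node-feature vectors."""
    denom = np.linalg.norm(x) * np.linalg.norm(y) + 1e-12
    return float(x @ y / denom)

def prune_low_similarity_edges(edges, features, tau=0.1):
    """Stage 1 (sketch): roughly remove edges whose endpoints have very
    low feature similarity, a hint that the edge may be adversarial."""
    return [(u, v) for u, v in edges
            if cosine_similarity(features[u], features[v]) >= tau]

def prune_heterophilous_edges(edges, preds_view_a, preds_view_b):
    """Stage 2 (sketch): trust a node's predicted label only when two
    views of the graph agree on it (labeling consensus), then drop
    edges that confidently connect nodes with different labels."""
    # Reliable labels: predictions consistent across both views.
    reliable = {i: a
                for i, (a, b) in enumerate(zip(preds_view_a, preds_view_b))
                if a == b}
    kept = []
    for u, v in edges:
        if u in reliable and v in reliable and reliable[u] != reliable[v]:
            continue  # confidently heterophilous -> likely attack-inserted
        kept.append((u, v))
    return kept
```

For example, with three nodes where node 2 has dissimilar features and a different consensus label, the edge (0, 2) would be pruned in stage 1, and an edge between two reliably but differently labeled nodes would be pruned in stage 2.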