Membership Inference Attacks Against Robust Graph Neural Network

Cited by: 4
Authors
Liu, Zhengyang [1 ]
Zhang, Xiaoyu [1 ]
Chen, Chenyang [1 ]
Lin, Shen [1 ]
Li, Jingjin [2 ]
Affiliations
[1] Xidian Univ, State Key Lab Integrated Serv Networks ISN, Xian 710071, Peoples R China
[2] James Cook Univ, Coll Sci & Engn, Townsville, Qld 4811, Australia
Source
CYBERSPACE SAFETY AND SECURITY, CSS 2022 | 2022 / Vol. 13547
Keywords
Graph neural network; Membership inference attack; Robust model; Adversarial training;
DOI
10.1007/978-3-031-18067-5_19
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
With the rapid development of neural network technologies in machine learning, neural networks are widely used in artificial intelligence tasks. Because graph data are ubiquitous, graph neural networks, a class of neural networks specialized for processing graph data, have become a research hotspot. This paper first studies the relationship between adversarial attacks and privacy attacks on graphs, i.e., whether a robust model obtained by adversarial training on graphs can improve the effectiveness of graph membership inference attacks. We also find that the gap between the robust model's loss on the training set and its loss on the test set is a critical reason for the increased membership inference attack success rate. Extensive experimental evaluations on Cora, Cora-ML, Citeseer, Polblogs, and Pubmed demonstrate that robust models obtained by adversarial training can significantly increase the success rate of membership inference attacks.
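The abstract attributes the attack's gain to the gap between a robust model's training-set and test-set loss. The following is a minimal, hypothetical sketch of a loss-threshold membership inference attack that illustrates this idea; the synthetic per-node losses, the threshold heuristic, and all names are assumptions made for illustration, not the authors' exact attack pipeline.

# Minimal sketch of a loss-threshold membership inference attack (Python).
# Hypothetical setup: per-node cross-entropy losses are assumed to have been
# computed for member (training) and non-member (test) nodes of a GNN;
# synthetic values stand in for them so the sketch is self-contained.
import numpy as np

rng = np.random.default_rng(0)

# Adversarially trained (robust) models tend to fit member nodes tightly,
# widening the loss gap between members and non-members.
member_losses = rng.exponential(scale=0.05, size=1000)     # low loss on training nodes
nonmember_losses = rng.exponential(scale=0.60, size=1000)  # higher loss on test nodes

def loss_threshold_attack(losses, threshold):
    # Predict "member" (1) when the per-node loss falls below the threshold.
    return (losses < threshold).astype(int)

# Attacker-chosen threshold heuristic: the average member loss.
threshold = member_losses.mean()

pred_members = loss_threshold_attack(member_losses, threshold)
pred_nonmembers = loss_threshold_attack(nonmember_losses, threshold)

# Attack accuracy over a balanced member/non-member evaluation set.
accuracy = 0.5 * (pred_members.mean() + (1 - pred_nonmembers).mean())
print(f"membership inference attack accuracy: {accuracy:.3f}")

Under these assumed loss distributions, the wider the train-test loss gap, the further this single-threshold attacker's accuracy rises above the 0.5 random-guess baseline, which mirrors the mechanism the abstract identifies for robust models.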
Pages: 259-273
Number of Pages: 15