A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability

Cited by: 12
Authors
Dai, Enyan [1 ]
Zhao, Tianxiang [1 ]
Zhu, Huaisheng [1 ]
Xu, Junjie [1 ]
Guo, Zhimeng [1 ]
Liu, Hui [2 ]
Tang, Jiliang [2 ]
Wang, Suhang [1 ]
Affiliations
[1] Pennsylvania State University, State College, PA 16801, USA
[2] Michigan State University, East Lansing, MI 48824, USA
Funding
US National Science Foundation
Keywords
Graph neural networks (GNNs); trustworthiness; privacy; robustness; fairness; explainability
DOI
10.1007/s11633-024-1510-8
CLC number
TP [Automation and Computer Technology]
Subject classification code
0812
Abstract
Graph neural networks (GNNs) have developed rapidly in recent years. Owing to their strong ability to model graph-structured data, GNNs are widely used in various applications, including high-stakes scenarios such as financial analysis, traffic prediction, and drug discovery. Despite their great potential to benefit humans in the real world, recent studies show that GNNs can leak private information, are vulnerable to adversarial attacks, can inherit and magnify societal bias from training data, and lack interpretability, all of which risk causing unintentional harm to users and society. For example, existing works demonstrate that attackers can fool GNNs into producing the outcomes they desire through unnoticeable perturbations on the training graph. GNNs trained on social networks may embed discrimination in their decision process, strengthening undesirable societal bias. Consequently, trustworthy GNNs in various aspects are emerging to prevent harm from GNN models and to increase users' trust in GNNs. In this paper, we give a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability. For each aspect, we give a taxonomy of the related methods and formulate general frameworks for the multiple categories of trustworthy GNNs. We also discuss future research directions for each aspect and the connections between these aspects that help achieve trustworthiness.
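To make the adversarial-attack scenario described in the abstract concrete, the following minimal sketch (illustrative only, not taken from the survey; the toy graph, random weights, and `gcn_layer` helper are assumptions of this example) shows how inserting a single edge shifts a node's representation under one GCN propagation step with symmetric normalization.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: D^{-1/2} (A + I) D^{-1/2} X W."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)                 # degrees of A + I
    D_inv_sqrt = np.diag(d ** -0.5)       # symmetric normalization
    return D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W

# Toy path graph on 4 nodes: 0-1, 1-2, 2-3
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

X = np.eye(4)                                         # one-hot node features
W = np.random.default_rng(0).normal(size=(4, 2))      # fixed random weights

clean = gcn_layer(A, X, W)

# Structure perturbation: insert a single edge 0-3
A_pert = A.copy()
A_pert[0, 3] = A_pert[3, 0] = 1.0
perturbed = gcn_layer(A_pert, X, W)

# Node 0's embedding shifts even though only one edge changed,
# which is what a structure attack exploits to flip downstream predictions.
print(np.linalg.norm(clean[0] - perturbed[0]))
```

In a real attack the perturbed edge is chosen (e.g., by a gradient-based search) to maximize the loss of a target node while staying below a perturbation budget; the sketch only shows why a tiny structural change propagates into the learned representations.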
Pages: 1011-1061 (51 pages)
References (296 total)
[51] Dai, Quanyu; Shen, Xiao; Zhang, Liang; Li, Qiang; Wang, Dan. Adversarial training methods for network embedding. Proceedings of the World Wide Web Conference (WWW 2019), 2019: 329-339.
[52] Danilevsky, M. Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing (AACL-IJCNLP 2020), 2020: 447.
[53] Debnath, A. K.; de Compadre, R. L. L.; Debnath, G.; Shusterman, A. J.; Hansch, C. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. Journal of Medicinal Chemistry, 1991, 34(2): 786-797.
[54] Deng, C. H. Proceedings of Machine Learning Research, 2022, vol. 198.
[55] Deng, Zhijie; Dong, Yinpeng; Zhu, Jun. Batch virtual adversarial training for graph convolutional networks. AI Open, 2023, 4: 73-79.
[56] Devlin, J. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), vol. 1, 2019: 4171.
[57] Dong, Yushun; Ma, Jing; Wang, Song; Chen, Chen; Li, Jundong. Fairness in graph mining: A survey. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(10): 10583-10602.
[58] Dong, Y. S. Proceedings of the AAAI Conference on Artificial Intelligence, 2023: 7441.
[59] Dong, Yushun; Wang, Song; Wang, Yu; Derr, Tyler; Li, Jundong. On structural explanation of bias in graph neural networks. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2022), 2022: 316-326.
[60] Dong, Yushun; Liu, Ninghao; Jalaian, Brian; Li, Jundong. EDITS: Modeling and mitigating data bias for graph neural networks. Proceedings of the ACM Web Conference 2022 (WWW '22), 2022: 1259-1269.