How does Heterophily Impact the Robustness of Graph Neural Networks? Theoretical Connections and Practical Implications

Cited by: 17
Authors
Zhu, Jiong [1 ]
Jin, Junchen [2 ]
Loveland, Donald [1 ]
Schaub, Michael T. [3 ]
Koutra, Danai [1 ]
Affiliations
[1] Univ Michigan, Ann Arbor, MI 48109 USA
[2] Northwestern Univ, Evanston, IL 60208 USA
[3] Rhein Westfal TH Aachen, Aachen, Germany
Source
PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022 | 2022
Funding
U.S. National Science Foundation
Keywords
graph neural networks; adversarial attacks; heterophily; structural perturbation; robustness; relation
DOI
10.1145/3534678.3539418
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
We bridge two research directions on graph neural networks (GNNs) by formalizing the relation between heterophily of node labels (i.e., connected nodes tend to have dissimilar labels) and the robustness of GNNs to adversarial attacks. Our theoretical and empirical analyses show that for homophilous graph data, impactful structural attacks always lead to reduced homophily, while for heterophilous graph data the change in the homophily level depends on the node degrees. These insights have practical implications for defending against attacks on real-world graphs: we deduce that separate aggregators for ego- and neighbor-embeddings, a design principle previously identified to significantly improve prediction on heterophilous graph data, can also offer increased robustness to GNNs. Our comprehensive experiments show that GNNs merely adopting this design achieve improved empirical and certifiable robustness compared to the best-performing unvaccinated model. Additionally, combining this design with explicit defense mechanisms against adversarial attacks yields improved robustness, with up to an 18.33% performance increase under attacks compared to the best-performing vaccinated model.
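The design principle highlighted above keeps a node's own (ego) embedding separate from the aggregated embedding of its neighbors, rather than mixing them via self-loops as a standard GCN layer does. A minimal sketch of one such layer, in plain NumPy (the function and weight names here are illustrative, not the paper's implementation):

```python
import numpy as np

def separate_aggregation_layer(X, A, W_ego, W_neigh):
    """One GNN layer with separate ego- and neighbor-aggregators.

    Instead of averaging a node's own features together with its
    neighbors' (as a GCN layer with self-loops does), the ego embedding
    and the mean neighbor embedding are transformed by separate weight
    matrices and concatenated.

    X: (n, d) node features; A: (n, n) adjacency WITHOUT self-loops;
    W_ego, W_neigh: (d, h) weight matrices. Returns (n, 2h).
    """
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                   # guard isolated nodes
    neigh = (A @ X) / deg                 # mean over neighbors only
    ego = X                               # node's own embedding, untouched
    # separate linear transforms, then concatenation keeps the two
    # representations distinguishable for downstream layers
    return np.concatenate([ego @ W_ego, neigh @ W_neigh], axis=1)
```

Because the ego embedding is never averaged with (possibly perturbed) neighbor features, an attacker inserting adversarial edges cannot directly overwrite a node's own representation, which is the intuition behind the robustness gain the abstract reports.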
Pages: 2637-2647
Page count: 11