On the Explainability of Graph Convolutional Network With GCN Tangent Kernel

Cited: 1
Authors
Zhou, Xianchen [1]
Wang, Hongxia [1]
Affiliations
[1] Natl Univ Def Technol, Changsha, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Convolutional networks; Feature dimensions; Graph data; Graph neural networks; Kernel models; Semi-supervised; Simple++; Toy models
DOI
10.1162/neco_a_01548
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The graph convolutional network (GCN) is a powerful deep model for graph data. However, the explainability of GCNs remains a difficult problem, since the training behavior of graph neural networks is hard to describe. In this work, we show that for a GCN with a wide hidden feature dimension, the output for a semisupervised problem can be described by a simple differential equation. Moreover, the output dynamics are governed by the graph convolutional neural tangent kernel (GCNTK), which is stable as the hidden-feature width tends to infinity. The solution of the node-classification problem can then be explained directly by this differential equation in the semisupervised setting. Experiments on toy models confirm the consistency between the GCNTK model and the GCN.
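To make the kernel concrete, the following is a minimal sketch (not from the paper) of how an empirical GCNTK can be computed for a toy two-layer GCN: the kernel is the Gram matrix J J^T of the network's Jacobian with respect to its parameters. All names, the 4-node graph, and the finite-difference Jacobian are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-node path graph with self-loops, symmetrically normalized.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))

n, d_in, width = 4, 3, 256          # wide hidden layer -> near-stable kernel
X = rng.normal(size=(n, d_in))      # random node features

def gcn(params):
    """Two-layer GCN with a scalar output per node."""
    W1, W2 = params
    H = np.maximum(A_hat @ X @ W1, 0.0)   # graph convolution + ReLU
    return (A_hat @ H @ W2).ravel()

# NTK-style initialization scaling.
W1 = rng.normal(size=(d_in, width)) / np.sqrt(d_in)
W2 = rng.normal(size=(width, 1)) / np.sqrt(width)
theta = np.concatenate([W1.ravel(), W2.ravel()])

def unpack(v):
    W1 = v[:d_in * width].reshape(d_in, width)
    W2 = v[d_in * width:].reshape(width, 1)
    return W1, W2

def jacobian(f, v, eps=1e-4):
    """Forward-difference Jacobian of f w.r.t. the flat parameter vector v."""
    f0 = f(unpack(v))
    J = np.zeros((f0.size, v.size))
    for i in range(v.size):
        vp = v.copy()
        vp[i] += eps
        J[:, i] = (f(unpack(vp)) - f0) / eps
    return J

J = jacobian(gcn, theta)
gcntk = J @ J.T                      # n x n empirical tangent kernel
print(gcntk.shape)                   # (4, 4)
```

Under the paper's infinite-width result, this matrix would stay (approximately) fixed during gradient-descent training, so the labeled-node outputs evolve as a linear ODE driven by `gcntk`.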
Pages: 1-26 (26 pages)