Graph Neural Network Bandits

Times Cited: 0
Authors
Kassraie, Parnian [1]
Krause, Andreas [1]
Bogunovic, Ilija [2]
Affiliations
[1] Swiss Fed Inst Technol, Zurich, Switzerland
[2] UCL, London, England
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022 | 2022
Funding
European Research Council
Keywords
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We consider the bandit optimization problem with the reward function defined over graph-structured data. This problem has important applications in molecule design and drug discovery, where the reward is naturally invariant to graph permutations. The key challenges in this setting are scaling to large domains and to graphs with many nodes. We resolve these challenges by embedding the permutation invariance into our model. In particular, we show that graph neural networks (GNNs) can be used to estimate the reward function, assuming it resides in the Reproducing Kernel Hilbert Space of a permutation-invariant additive kernel. By establishing a novel connection between such kernels and the graph neural tangent kernel (GNTK), we introduce the first GNN confidence bound and use it to design a phased-elimination algorithm with sublinear regret. Our regret bound depends on the GNTK's maximum information gain, which we also bound. While the reward function depends on all N node features, our guarantees are independent of the number of graph nodes N. Empirically, our approach exhibits competitive performance and scales well on graph-structured domains.
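To make the phased-elimination idea in the abstract concrete, here is a minimal generic sketch of such an algorithm over a finite arm set. This is not the paper's GNN/GNTK construction: it substitutes ridge regression on linear features for the GNN reward estimator, and the parameters (`beta`, `num_phases`, `samples_per_phase`, the noise level) are hypothetical choices for illustration only.

```python
import numpy as np

def phased_elimination(features, reward_fn, num_phases=5,
                       samples_per_phase=20, beta=2.0, seed=0):
    """Generic phased elimination with UCB-style confidence bounds.

    In each phase: sample active arms uniformly, fit a ridge-regression
    estimate of the reward, then eliminate every arm whose upper
    confidence bound falls below the best lower confidence bound.
    (A stand-in for the paper's GNN estimator and GNTK-based bound.)
    """
    rng = np.random.default_rng(seed)
    n, d = features.shape
    active = np.arange(n)           # indices of arms still in play
    lam = 1.0                       # ridge regularization

    for _ in range(num_phases):
        # Uniformly sample rewards for active arms (noisy observations).
        idx = rng.choice(active, size=samples_per_phase)
        X = features[idx]
        y = np.array([reward_fn(i) + 0.1 * rng.standard_normal()
                      for i in idx])

        # Ridge-regression estimate of the reward function.
        A = X.T @ X + lam * np.eye(d)
        theta = np.linalg.solve(A, X.T @ y)

        # Confidence widths from the regularized design matrix:
        # width_i = beta * sqrt(x_i^T A^{-1} x_i).
        Xa = features[active]
        mean = Xa @ theta
        width = beta * np.sqrt(
            np.einsum('ij,jk,ik->i', Xa, np.linalg.inv(A), Xa))

        # Keep only arms whose UCB exceeds the best LCB.
        keep = (mean + width) >= (mean - width).max()
        active = active[keep]

    return active
```

The elimination rule is what yields phased-regret guarantees: arms provably suboptimal at the current confidence level never get sampled again, so exploration concentrates on the shrinking active set.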
Pages: 13