ScatterSample: Diversified Label Sampling for Data Efficient Graph Neural Network Learning

Cited by: 0
Authors
Dai, Zhenwei [1 ]
Ioannidis, Vasileios [2 ]
Adeshina, Soji [2 ]
Jost, Zak [2 ]
Faloutsos, Christos [3 ]
Karypis, George [2 ]
Affiliations
[1] Rice Univ, Dept Stat, Houston, TX 77251 USA
[2] Amazon Web Serv, Seattle, WA USA
[3] Carnegie Mellon Univ, Dept Comp Sci, Pittsburgh, PA 15213 USA
Source
LEARNING ON GRAPHS CONFERENCE, 2022, Vol. 198
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
What target labels are most effective for graph neural network (GNN) training? In some applications where GNNs excel, such as drug design or fraud detection, labeling new instances is expensive. We develop a data-efficient active sampling framework, ScatterSample, to train GNNs under an active learning setting. ScatterSample employs a sampling module, DiverseUncertainty, that collects instances with large uncertainty from different regions of the sample space for labeling. To ensure diversity among the selected nodes, DiverseUncertainty clusters the high-uncertainty nodes and selects a representative node from each cluster. ScatterSample is further supported by rigorous theoretical analysis demonstrating its advantage over standard active sampling methods that simply maximize uncertainty without diversifying the samples; in particular, we show that ScatterSample efficiently reduces model uncertainty over the whole sample space. Experiments on five datasets show that ScatterSample significantly outperforms other GNN active learning baselines, reducing the sampling cost by up to 50% while achieving the same test accuracy.
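The DiverseUncertainty step described in the abstract can be sketched as follows. This is a hedged illustration, not the paper's implementation: the function name, the entropy-based uncertainty score, the top-fraction pooling, and the plain NumPy k-means are all assumptions filling in details the abstract does not specify.

```python
import numpy as np

def diverse_uncertainty_sample(probs, embeddings, budget,
                               top_frac=0.5, iters=20, seed=0):
    """Hypothetical sketch of a DiverseUncertainty-style selector:
    1) score each node by predictive entropy (assumed uncertainty measure),
    2) keep the most uncertain fraction of nodes as a candidate pool,
    3) k-means cluster the pool's embeddings into `budget` clusters,
    4) pick the pool node nearest each cluster center for labeling."""
    rng = np.random.default_rng(seed)
    # 1) predictive entropy as the uncertainty score
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    # 2) restrict to the high-uncertainty pool (at least `budget` nodes)
    pool = np.argsort(ent)[-max(budget, int(top_frac * len(ent))):]
    X = embeddings[pool]
    # 3) plain k-means on the pool embeddings
    centers = X[rng.choice(len(X), size=budget, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = dist.argmin(axis=1)
        for k in range(budget):
            if np.any(assign == k):
                centers[k] = X[assign == k].mean(axis=0)
    # 4) representative = pool node closest to each final center
    dist = np.linalg.norm(X[:, None] - centers[None], axis=2)
    return sorted({int(pool[dist[:, k].argmin()]) for k in range(budget)})
```

Clustering before selection is what distinguishes this scheme from pure uncertainty sampling: the `budget` labels are spread across distinct regions of the embedding space rather than concentrated near one decision boundary.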
Pages: 15