Continual Learning on Dynamic Graphs via Parameter Isolation

Cited by: 13
Authors
Zhang, Peiyan [1 ]
Yan, Yuchen [3 ]
Li, Chaozhuo [4 ]
Wang, Senzhang [2 ]
Xie, Xing [4 ]
Song, Guojie [3 ]
Kim, Sunghun [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
[2] Cent South Univ, Changsha, Peoples R China
[3] Peking Univ, Sch Intelligence Sci & Technol, Beijing, Peoples R China
[4] Microsoft Res Asia, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023 | 2023
Keywords
Graph neural networks; Continual learning; Streaming networks
DOI
10.1145/3539618.3591652
CLC number
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Many real-world graph learning tasks require handling dynamic graphs, where new nodes and edges emerge over time. Dynamic graph learning methods commonly suffer from catastrophic forgetting: knowledge learned for previous graphs is overwritten by updates for new graphs. Continual graph learning methods have been proposed to alleviate this problem. However, existing continual graph learning methods aim to learn new patterns and maintain old ones with a single fixed-size set of parameters, and thus face a fundamental tradeoff between the two goals. In this paper, we propose Parameter Isolation GNN (PI-GNN) for continual learning on dynamic graphs, which circumvents the tradeoff via parameter isolation and expansion. Our motivation is that different parameters contribute to learning different graph patterns. Based on this idea, we expand the model parameters to continually learn emerging graph patterns. Meanwhile, to effectively preserve knowledge of unaffected patterns, we identify the parameters that correspond to them via optimization and freeze them to prevent them from being rewritten. Experiments on eight real-world datasets corroborate the effectiveness of PI-GNN compared to state-of-the-art baselines.
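The mechanism the abstract describes, expanding parameters to learn new patterns while freezing the parameters that serve old ones, can be sketched in a toy form. The class and method names below (`ParamIsolationLayer`, `expand`, `apply_grad`) are hypothetical illustrations, not the paper's actual PI-GNN: in particular, the paper selects which parameters to freeze via optimization, whereas this sketch simply freezes all pre-existing parameters on expansion.

```python
import numpy as np

class ParamIsolationLayer:
    """Toy sketch of parameter isolation and expansion (assumed names,
    not the paper's PI-GNN). Existing parameter columns are frozen when
    new ones are added, so earlier knowledge cannot be overwritten."""

    def __init__(self, in_dim, out_dim, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = self.rng.normal(size=(in_dim, out_dim)) * 0.1
        # Boolean per-column mask: True means the column is frozen.
        self.frozen = np.zeros(out_dim, dtype=bool)

    def expand(self, n_new):
        """Freeze all current columns, then append n_new trainable ones."""
        self.frozen[:] = True
        new_cols = self.rng.normal(size=(self.W.shape[0], n_new)) * 0.1
        self.W = np.concatenate([self.W, new_cols], axis=1)
        self.frozen = np.concatenate(
            [self.frozen, np.zeros(n_new, dtype=bool)]
        )

    def apply_grad(self, grad, lr=0.1):
        """Gradient step that leaves frozen columns untouched."""
        grad = grad.copy()
        grad[:, self.frozen] = 0.0  # isolation: no update to frozen params
        self.W -= lr * grad

# Usage: after expansion, updates for the "new graph" leave old columns intact.
layer = ParamIsolationLayer(4, 2)
old_params = layer.W.copy()
layer.expand(3)                       # new task arrives: grow capacity
layer.apply_grad(np.ones((4, 5)))     # training step on the new task
assert np.allclose(layer.W[:, :2], old_params)  # old knowledge preserved
```

The key design point is that forgetting is prevented structurally (frozen parameters simply receive zero gradient) rather than by regularization penalties competing with the new-task loss.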
Pages: 601-611 (11 pages)