Continual Learning on Dynamic Graphs via Parameter Isolation

Cited by: 10
Authors
Zhang, Peiyan [1 ]
Yan, Yuchen [3 ]
Li, Chaozhuo [4 ]
Wang, Senzhang [2 ]
Xie, Xing [4 ]
Song, Guojie [3 ]
Kim, Sunghun [1 ]
Affiliations
[1] Hong Kong Univ Sci & Technol, Hong Kong, Peoples R China
[2] Cent South Univ, Changsha, Peoples R China
[3] Peking Univ, Sch Intelligence Sci & Technol, Beijing, Peoples R China
[4] Microsoft Res Asia, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023 | 2023
Keywords
Graph neural networks; Continual learning; Streaming networks;
DOI
10.1145/3539618.3591652
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Many real-world graph learning tasks require handling dynamic graphs where new nodes and edges emerge. Dynamic graph learning methods commonly suffer from the catastrophic forgetting problem, where knowledge learned for previous graphs is overwritten by updates for new graphs. To alleviate this problem, continual graph learning methods have been proposed. However, existing continual graph learning methods aim to learn new patterns and maintain old ones with the same set of parameters of fixed size, and thus face a fundamental tradeoff between the two goals. In this paper, we propose Parameter Isolation GNN (PI-GNN) for continual learning on dynamic graphs, which circumvents the tradeoff via parameter isolation and expansion. Our motivation is that different parameters contribute to learning different graph patterns. Based on this idea, we expand model parameters to continually learn emerging graph patterns. Meanwhile, to effectively preserve knowledge for unaffected patterns, we identify the parameters that correspond to them via optimization and freeze them to prevent them from being overwritten. Experiments on eight real-world datasets corroborate the effectiveness of PI-GNN compared to state-of-the-art baselines.
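To make the parameter isolation and expansion idea concrete, the following is a minimal PyTorch sketch; it is an illustration under our own assumptions (class and method names such as ExpandableLinear and expand are hypothetical), not the authors' PI-GNN implementation. It shows a layer that freezes weight blocks learned on earlier graph snapshots and appends a fresh trainable block when a new snapshot arrives.

    import torch
    import torch.nn as nn

    class ExpandableLinear(nn.Module):
        # A linear transformation whose output width grows with each new graph
        # snapshot; blocks learned on earlier snapshots are frozen so training
        # on new data cannot overwrite them. (Illustrative sketch only, not
        # the authors' PI-GNN code.)
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.blocks = nn.ParameterList(
                [nn.Parameter(torch.randn(in_dim, out_dim) * 0.01)]
            )

        def expand(self, extra_out_dim):
            # Freeze every block learned so far (parameter isolation) ...
            for block in self.blocks:
                block.requires_grad_(False)
            # ... and allocate a fresh trainable block for emerging patterns
            # (parameter expansion).
            in_dim = self.blocks[0].shape[0]
            self.blocks.append(
                nn.Parameter(torch.randn(in_dim, extra_out_dim) * 0.01)
            )

        def forward(self, x):
            # Concatenate frozen and trainable blocks into one weight matrix.
            return x @ torch.cat(list(self.blocks), dim=1)

In a training loop, expand would be called once per incoming snapshot before fitting on the new edges, so only the newly added block receives gradient updates while the frozen blocks continue to serve the stable, unaffected patterns.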
Pages: 601-611
Number of pages: 11