Attraction and Repulsion: Unsupervised Domain Adaptive Graph Contrastive Learning Network

Cited by: 20
Authors
Wu, Man [1 ]
Pan, Shirui [2 ]
Zhu, Xingquan [1 ]
Affiliations
[1] Florida Atlantic Univ, Dept Elect Engn & Comp Sci, Boca Raton, FL 33431 USA
[2] Monash Univ, Melbourne, Vic 3800, Australia
Source
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE | 2022, Vol. 6, No. 5
Funding
US National Science Foundation; Australian Research Council;
Keywords
Adaptation models; Force; Adaptive systems; Representation learning; Mutual information; Knowledge engineering; Task analysis; Domain adaptive learning; graph contrastive learning; graph neural network; node classification;
DOI
10.1109/TETCI.2022.3156044
CLC Number (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence];
Discipline Code(s)
081104; 0812; 0835; 1405;
Abstract
Graph convolutional networks (GCNs) are important techniques for analytics tasks on graph data. To date, most GCNs are designed for a single graph domain and cannot transfer knowledge between domains (graphs), owing to limitations in graph representation learning and in domain adaptation across graph domains. This paper proposes a novel Graph Contrastive Learning Network (GCLN) for unsupervised domain adaptive graph learning. The key innovation is to enforce attraction and repulsion forces both within each graph domain and across the two graph domains. Within each graph, an attraction force encourages local patch (node) features to be similar to the global representation of the entire graph, whereas a repulsion force pushes node features away from those of permuted versions of the graph (i.e., domain-specific graph contrastive learning). Across the two graph domains, an attraction force encourages node features from the two domains to be largely consistent, whereas a repulsion force keeps features discriminative enough to differentiate the graph domains (i.e., cross-domain graph contrastive learning). The within- and cross-domain graph contrastive learning are carried out by optimizing a single objective function that combines the source and target classifier losses, the domain-specific contrastive loss, and the cross-domain contrastive loss. As a result, feature learning on each graph is facilitated by knowledge transferred between the graphs. Experiments on real-world datasets demonstrate that GCLN outperforms state-of-the-art graph neural network algorithms.
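To make the combined objective concrete, a minimal sketch of its form is given below; the loss symbols and the trade-off weights \lambda_1 and \lambda_2 are illustrative assumptions, not notation taken from the paper:

\mathcal{L} = \mathcal{L}_{cls}^{s} + \mathcal{L}_{cls}^{t} + \lambda_1 \left( \mathcal{L}_{con}^{s} + \mathcal{L}_{con}^{t} \right) + \lambda_2 \, \mathcal{L}_{cross}

Here \mathcal{L}_{cls}^{s} and \mathcal{L}_{cls}^{t} denote the source and target classifier losses, \mathcal{L}_{con}^{s} and \mathcal{L}_{con}^{t} the domain-specific (within-graph) contrastive losses, and \mathcal{L}_{cross} the cross-domain contrastive loss, matching the four terms enumerated in the abstract.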
Pages: 1079-1091
Page count: 13