Collaborative graph neural networks for augmented graphs: A local-to-global perspective

Cited: 9
Authors
Guo, Qihang [1 ]
Yang, Xibei [1 ,2 ]
Li, Ming [3 ,4 ]
Qian, Yuhua [5 ]
Affiliations
[1] Jiangsu Univ Sci & Technol, Sch Econ & Management, Zhenjiang 212100, Jiangsu, Peoples R China
[2] Jiangsu Univ Sci & Technol, Sch Comp, Zhenjiang 212100, Jiangsu, Peoples R China
[3] Zhejiang Inst Optoelect, Jinhua 321004, Zhejiang, Peoples R China
[4] Zhejiang Normal Univ, Zhejiang Key Lab Intelligent Educ Technol & Applic, Jinhua 321004, Zhejiang, Peoples R China
[5] Shanxi Univ, Inst Big Data Sci & Ind, Taiyuan 030006, Shanxi, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph neural networks; Multi-perspective learning; Embedding fusion; Complementary learning; CONSTRUCTION;
DOI
10.1016/j.patcog.2024.111020
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the field of graph neural networks (GNNs) for representation learning, a noteworthy highlight is the potential of embedding fusion architectures for augmented graphs. However, prevalent GNN embedding fusion architectures mainly handle graph combinations from a global perspective, often ignoring collaboration with the information carried by local graph combinations. This inherent limitation constrains the ability of the constructed models to handle multiple input graphs, particularly when dealing with noisy input graphs collected from error-prone sources or those resulting from deficiencies in graph augmentation methods. In this paper, we propose an effective and robust embedding fusion architecture from a local-to-global perspective, termed collaborative graph neural networks for augmented graphs (LoGo-GNN). Essentially, LoGo-GNN leverages a pairwise graph combination scheme to generate local perspective inputs. Together with the global graph combination, this serves as the basis for a local-to-global perspective. Specifically, LoGo-GNN employs a perturbation augmentation strategy to generate multiple augmented graphs, thereby facilitating collaboration and embedding fusion from a local-to-global perspective through the use of graph combinations. In addition, LoGo-GNN incorporates a novel loss function for learning complementary information between different perspectives. We also conduct a theoretical analysis of its expressive power under ideal conditions, demonstrating the effectiveness of LoGo-GNN. Our experiments, focusing on node classification and clustering tasks, highlight the superior performance of LoGo-GNN compared to state-of-the-art methods. Additionally, robustness analysis further confirms its effectiveness in addressing uncertainty challenges.
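The abstract describes building multiple perturbation-augmented graphs, then forming pairwise (local) combinations alongside one global combination of all views. The paper's actual scheme is not given here, so the following is only a minimal toy sketch of that idea, assuming edge-dropping as the perturbation and edge-set union as the combination operator on adjacency matrices; the function names and parameters are illustrative, not from the paper:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def perturb(adj, drop_prob=0.2):
    """Perturbation augmentation: randomly drop existing edges (kept symmetric)."""
    keep = rng.random(adj.shape) > drop_prob
    upper = np.triu(adj, 1) * np.triu(keep, 1)  # perturb upper triangle only
    return upper + upper.T                       # mirror to keep the graph undirected

def combine(adjs):
    """Combine graphs by taking the union of their edge sets."""
    out = np.zeros_like(adjs[0])
    for a in adjs:
        out = np.maximum(out, a)
    return out

# Toy undirected 4-node graph.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 1],
                [1, 1, 0, 1],
                [0, 1, 1, 0]], dtype=float)

# Multiple augmented views of the input graph.
views = [perturb(adj) for _ in range(3)]

# Local perspective: every pairwise combination of views.
local_combos = [combine([a, b]) for a, b in combinations(views, 2)]

# Global perspective: combination of all views at once.
global_combo = combine(views)
```

In the full architecture each local combination and the global combination would be encoded by a GNN and the resulting embeddings fused; this sketch only shows how the local-to-global set of inputs could be constructed.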
Pages: 14
Related Papers
40 references in total
[11]   Deepwalk-aware graph convolutional networks [J].
Jin, Taisong ;
Dai, Huaqiang ;
Cao, Liujuan ;
Zhang, Baochang ;
Huang, Feiyue ;
Gao, Yue ;
Ji, Rongrong .
SCIENCE CHINA-INFORMATION SCIENCES, 2022, 65 (05)
[12]  
Khan MR, 2019, AAAI CONF ARTIF INTE, P606
[13]  
Kipf T. N., 2017, P INT C LEARN REPR, DOI 10.48550/ARXIV.1609.02907
[14]   BLoG: Bootstrapped graph representation learning with local and global regularization for recommendation [J].
Li, Ming ;
Zhang, Lin ;
Cui, Lixin ;
Bai, Lu ;
Li, Zhao ;
Wu, Xindong .
PATTERN RECOGNITION, 2023, 144
[15]   Multi-relational graph attention networks for knowledge graph completion [J].
Li, Zhifei ;
Zhao, Yue ;
Zhang, Yan ;
Zhang, Zhaoli .
KNOWLEDGE-BASED SYSTEMS, 2022, 251
[16]   Graph contrastive learning with implicit augmentations [J].
Liang, Huidong ;
Du, Xingjian ;
Zhu, Bilei ;
Ma, Zejun ;
Chen, Ke ;
Gao, Junbin .
NEURAL NETWORKS, 2023, 163 :156-164
[17]  
Liu ST, 2022, PR MACH LEARN RES
[18]  
Liu YB, 2023, PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, P2215
[19]   Graph Self-Supervised Learning: A Survey [J].
Liu, Yixin ;
Jin, Ming ;
Pan, Shirui ;
Zhou, Chuan ;
Zheng, Yu ;
Xia, Feng ;
Yu, Philip S. .
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (06) :5879-5900
[20]   An integral representation of functions using three-layered networks and their approximation bounds [J].
Murata, N .
NEURAL NETWORKS, 1996, 9 (06) :947-956