Clustering Enhanced Multiplex Graph Contrastive Representation Learning

Cited by: 4
Authors
Yuan, Ruiwen [1 ,2 ]
Tang, Yongqiang [2 ]
Wu, Yajing [2 ]
Zhang, Wensheng [2 ,3 ]
Affiliations
[1] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 101408, Peoples R China
[2] Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence, Beijing 100190, Peoples R China
[3] Guangzhou Univ, Sch Comp Sci & Cyber Engn, Guangzhou 510006, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Multiplexing; Representation learning; Semantics; Feature extraction; Correlation; Training; Task analysis; Contrastive learning; graph representation learning; multiplex graph; multiview graph clustering;
DOI
10.1109/TNNLS.2023.3334751
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Multiplex graph representation learning has attracted considerable attention due to its powerful capacity to depict multiple relation types between nodes. Previous methods generally learn representations of each relation-based subgraph and then aggregate them into final representations. Despite the enormous success, they commonly encounter two challenges: 1) the latent community structure is overlooked and 2) consistent and complementary information across relation types remains largely unexplored. To address these issues, we propose a clustering-enhanced multiplex graph contrastive representation learning model (CEMR). In CEMR, by formulating each relation type as a view, we propose a multiview graph clustering framework to discover the potential community structure, which encourages the representations to incorporate global semantic correlations. Moreover, under the proposed multiview clustering framework, we develop cross-view contrastive learning and cross-view cosupervision modules to explore consistent and complementary information in different views, respectively. Specifically, the cross-view contrastive learning module, equipped with a novel negative pair selection mechanism, enables the view-specific representations to extract common knowledge across views. The cross-view cosupervision module exploits the high-confidence complementary information in one view to guide low-confidence clustering in other views via contrastive learning. Comprehensive experiments on four datasets confirm the superiority of our CEMR over state-of-the-art rivals.
Pages: 1341-1355
Page count: 15
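The abstract describes a cross-view contrastive learning module whose negative pair selection is tied to the discovered cluster structure. As an illustration only, the following is a minimal PyTorch sketch of a cluster-aware cross-view InfoNCE loss; the function name, arguments, and the rule of discarding same-cluster pairs as negatives are assumptions inferred from the abstract, not the authors' implementation.

import torch
import torch.nn.functional as F

def cross_view_contrastive_loss(z_a, z_b, cluster_ids, temperature=0.5):
    # z_a, z_b: (N, d) node embeddings from two relation-specific views.
    # cluster_ids: (N,) cluster assignments used to filter likely false negatives.
    z_a = F.normalize(z_a, dim=1)                  # cosine similarity via unit vectors
    z_b = F.normalize(z_b, dim=1)
    sim = torch.exp(z_a @ z_b.t() / temperature)   # (N, N) cross-view similarity matrix
    pos = sim.diagonal()                           # same node observed in the other view
    # Cluster-aware negative selection (assumption): pairs that fall in the same
    # cluster are treated as likely false negatives and excluded from the denominator.
    same_cluster = cluster_ids.unsqueeze(0) == cluster_ids.unsqueeze(1)
    neg = (sim * (~same_cluster)).sum(dim=1)
    return -torch.log(pos / (pos + neg)).mean()

# Toy usage with random embeddings and cluster assignments.
z_a, z_b = torch.randn(8, 16), torch.randn(8, 16)
cluster_ids = torch.randint(0, 3, (8,))
loss = cross_view_contrastive_loss(z_a, z_b, cluster_ids)

In the paper's setting, z_a and z_b would come from relation-specific graph encoders and cluster_ids from the multiview clustering framework; here they are placeholders.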