Refining Euclidean Obfuscatory Nodes Helps: A Joint-Space Graph Learning Method for Graph Neural Networks

Cited by: 0
Authors
Liu, Zhaogeng [1 ,2 ]
Ji, Feng [3 ]
Yang, Jielong [1 ,2 ]
Cao, Xiaofeng [1 ,2 ]
Zhang, Muhan [4 ]
Chen, Hechang [1 ,2 ]
Chang, Yi [1 ,2 ,5 ]
Affiliations
[1] Jilin Univ, Sch Artificial Intelligence, Changchun 130012, Peoples R China
[2] Minist Educ, Engn Res Ctr Knowledge Driven Human Machine Intell, Changchun 130012, Peoples R China
[3] Nanyang Technol Univ, Sch Elect & Elect Engn, Nanyang 642197, Singapore
[4] Peking Univ, Inst Artificial Intelligence, Beijing 100871, Peoples R China
[5] Jilin Univ, Int Ctr Future Sci, Changchun 130012, Peoples R China
Funding
National Key R&D Program of China;
Keywords
Topology; Refining; Distortion; Geometry; Training; Research and development; Network topology; Graph learning; graph neural networks (GNNs); joint-space; obfuscatory nodes;
DOI
10.1109/TNNLS.2024.3405898
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Many graph neural networks (GNNs) are inapplicable when the graph structure representing the node relations is unavailable. Recent studies have shown that this problem can be effectively solved by jointly learning the graph structure and the parameters of GNNs. However, most of these methods learn graphs by using either a Euclidean or hyperbolic metric, which means that the space curvature is assumed to be either constant zero or constant negative. Graph embedding spaces usually have nonconstant curvatures, and thus, such an assumption may produce some obfuscatory nodes, which are improperly embedded and close to multiple categories. In this article, we propose a joint-space graph learning (JSGL) method for GNNs. JSGL learns a graph based on Euclidean embeddings and identifies Euclidean obfuscatory nodes. Then, the graph topology near the identified obfuscatory nodes is refined in hyperbolic space. We also present a theoretical justification of our method for identifying obfuscatory nodes and conduct a series of experiments to test the performance of JSGL. The results show that JSGL outperforms many baseline methods. To obtain more insights, we analyze potential reasons for this superior performance.
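The abstract's key geometric point is that a fixed-curvature metric can leave some embedded nodes ambiguously close to several categories. A minimal sketch of why hyperbolic space can help (this is only an illustration of the distance contrast, not the paper's actual JSGL procedure; the point coordinates are made up):

```python
import math

def euclidean_distance(u, v):
    """Ordinary flat-space distance between two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def poincare_distance(u, v):
    """Distance in the Poincare-ball model of hyperbolic space:
    arcosh(1 + 2*|u-v|^2 / ((1-|u|^2)*(1-|v|^2))), for |u|,|v| < 1."""
    sq_norm = lambda x: sum(a * a for a in x)
    diff = sq_norm([a - b for a, b in zip(u, v)])
    denom = (1.0 - sq_norm(u)) * (1.0 - sq_norm(v))
    return math.acosh(1.0 + 2.0 * diff / denom)

# Two pairs of points with the SAME Euclidean separation (0.1): one pair
# near the origin, one near the ball boundary. Hyperbolic distance grows
# sharply toward the boundary, giving extra "room" to pull apart nodes
# that look ambiguously close to multiple categories in Euclidean space.
d_origin = poincare_distance((0.0, 0.0), (0.1, 0.0))
d_boundary = poincare_distance((0.8, 0.0), (0.9, 0.0))
print(f"near origin: {d_origin:.3f}, near boundary: {d_boundary:.3f}")
```

Refining the graph topology around the identified obfuscatory nodes with such a hyperbolic metric is, per the abstract, what lets JSGL separate nodes that a constant-zero-curvature (Euclidean) embedding conflates.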
Pages: 11720-11733
Page count: 14