Graph Information Vanishing Phenomenon in Implicit Graph Neural Networks

Cited by: 0
Authors
He, Silu [1 ]
Cao, Jun [1 ]
Yuan, Hongyuan [1 ]
Chen, Zhe [1 ]
Gao, Shijuan [1 ,2 ]
Li, Haifeng [1 ]
Affiliations
[1] Cent South Univ, Sch Geosci & Info Phys, Changsha 410083, Peoples R China
[2] Cent South Univ, Informat & Network Ctr, Changsha 410083, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
graph neural network; graph information; joint training; graph curvature; 68-XX; CONVOLUTIONAL NETWORKS; RICCI CURVATURE;
DOI
10.3390/math12172659
CLC number
O1 [Mathematics];
Subject classification code
0701 ; 070101 ;
Abstract
Graph neural networks (GNNs) have been highly successful in graph representation learning. The goal of GNNs is to enrich node representations by aggregating information from neighboring nodes. Much work has attempted to improve the quality of aggregation by introducing a variety of graph information with representational capabilities. The class of GNNs that improve aggregation quality by encoding such graph information into the weights of neighboring nodes through different learnable transformation structures (LTSs) is referred to as implicit GNNs. However, we argue that LTSs only transform graph information into neighbor weights in whatever direction minimizes the loss function during training and do not actually exploit the effective properties of the graph information, a phenomenon we refer to as graph information vanishing (GIV). To validate this point, we perform thousands of experiments on seven node classification benchmark datasets. We first replace the graph information used by five implicit GNNs with random values and, surprisingly, observe that the resulting accuracies vary by less than ±0.3%. We then quantify, via cosine similarity, how similar the weights generated from graph information are to those generated from random values; the cosine similarities exceed 0.99. These empirical results show that the graph information amounts to no more than an initialization of the LTS input. We believe that using graph information as an additional supervisory signal to constrain the training of GNNs can effectively mitigate GIV. Accordingly, we propose GinfoNN, which uses both labels and discrete graph curvature as supervisory signals to jointly constrain model training. Experimental results show that the classification accuracy of GinfoNN improves by two percentage points over baselines on large and dense datasets.
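The abstract describes two concrete procedures: the GIV diagnostic (compare the neighbor weights an LTS produces from real graph information against those it produces from random values) and the GinfoNN objective (label supervision plus curvature supervision). The sketch below is a minimal PyTorch illustration of both ideas, not the authors' implementation: the toy LTS, tensor shapes, stand-in tensors, and the trade-off weight `lambda_curv` are illustrative assumptions.

    # Minimal sketch (assumed setup, not the paper's code).
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)
    num_edges, info_dim, num_nodes, num_classes = 1000, 8, 200, 7

    # Toy "learnable transformation structure": maps per-edge graph
    # information (e.g. discrete curvature features) to a scalar neighbor weight.
    lts = torch.nn.Sequential(
        torch.nn.Linear(info_dim, 16),
        torch.nn.ReLU(),
        torch.nn.Linear(16, 1),
    )

    # (1) GIV diagnostic: weights from real graph information vs. random values.
    graph_info = torch.randn(num_edges, info_dim)   # stand-in for real curvature features
    random_info = torch.randn(num_edges, info_dim)  # random replacement
    w_real = lts(graph_info).squeeze(-1)
    w_rand = lts(random_info).squeeze(-1)
    cos_sim = F.cosine_similarity(w_real, w_rand, dim=0)
    print(f"cosine similarity of neighbor weights: {cos_sim.item():.4f}")
    # In this untrained toy the value is arbitrary; the paper reports > 0.99
    # for trained implicit GNNs, i.e. the weights barely depend on the LTS input.

    # (2) GinfoNN-style joint objective: label loss plus curvature-supervision loss.
    logits = torch.randn(num_nodes, num_classes, requires_grad=True)  # stand-in GNN output
    labels = torch.randint(0, num_classes, (num_nodes,))
    pred_curv = torch.randn(num_edges, requires_grad=True)            # stand-in curvature head
    true_curv = torch.randn(num_edges)                                # discrete curvature targets
    lambda_curv = 0.1                                                 # illustrative trade-off weight
    loss = F.cross_entropy(logits, labels) + lambda_curv * F.mse_loss(pred_curv, true_curv)
    loss.backward()
    print(f"joint loss: {loss.item():.4f}")

The point of the joint objective is that the curvature term forces the network to actually predict (and therefore use) the graph information, rather than letting the LTS absorb it as an arbitrary input initialization.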
Pages: 19