Dynamic Representation Learning via Recurrent Graph Neural Networks

Cited by: 11
Authors
Zhang, Chun-Yang [1 ]
Yao, Zhi-Liang [1 ]
Yao, Hong-Yu [1 ]
Huang, Feng [2 ]
Chen, C. L. Philip [3 ]
Affiliations
[1] Fuzhou Univ, Sch Comp & Data Sci, Fuzhou 350025, Peoples R China
[2] Fuzhou Univ, Sch Mech Engn & Automat, Fuzhou 350025, Peoples R China
[3] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Representation learning; Recurrent neural networks; Matrix decomposition; Feature extraction; Computational modeling; Biological system modeling; Task analysis; Dynamic graphs; graph neural networks (GNNs); graph representation learning (GRL); node embeddings; recurrent neural network (RNN); COMMUNITY;
DOI
10.1109/TSMC.2022.3196506
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
A large number of real-world systems generate graphs, i.e., structured data composed of nodes and edges. In many scenarios these graphs are dynamic, with nodes or edges evolving over time. Recently, graph representation learning (GRL) has achieved great success in network analysis; it aims to produce informative and representative features or low-dimensional embeddings by exploiting node attributes and network topology. Most state-of-the-art models for dynamic GRL combine a static representation learning model with a recurrent neural network (RNN). The former generates representations of a graph or its nodes from one static snapshot at a discrete time step, while the latter captures the temporal correlation between adjacent graphs. However, this two-stage design ignores the temporal dynamics between contiguous graphs during the learning of graph representations. To alleviate this problem, this article proposes a representation learning model for dynamic graphs, called DynGNN. In contrast, it is a single-stage model that embeds an RNN into a graph neural network to produce better representations in a compact form. It takes the fusion of temporal and topological correlations into account from low-level to high-level feature learning, enabling the model to capture more fine-grained evolving patterns. Experimental results on both synthetic and real-world networks show that the proposed DynGNN yields significant improvements in multiple tasks compared with state-of-the-art counterparts.
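The single-stage idea described in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: instead of encoding each snapshot with a static GNN and then feeding the embeddings to a separate RNN, the recurrent state update is fused with the graph aggregation, so temporal and topological information mix in a single step. All function names, dimensions, and the specific update rule are assumptions chosen for illustration.

```python
import numpy as np

def normalize_adj(A):
    """Row-normalize an adjacency matrix with self-loops: D^-1 (A + I)."""
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(axis=1, keepdims=True)
    return A_hat / deg

def fused_recurrent_graph_step(A_t, X_t, H_prev, W_x, W_h):
    """One fused update (illustrative): the graph aggregation is applied to
    the recurrent hidden state itself, rather than to the output of a
    separate static encoder, so topology and time interact at every layer."""
    A_norm = normalize_adj(A_t)
    return np.tanh(A_norm @ (X_t @ W_x + H_prev @ W_h))

def run_dynamic_gnn(snapshots, features, d_hidden, rng):
    """Roll the fused update over a sequence of (adjacency, feature) pairs."""
    n, d_in = features[0].shape
    W_x = rng.standard_normal((d_in, d_hidden)) * 0.1
    W_h = rng.standard_normal((d_hidden, d_hidden)) * 0.1
    H = np.zeros((n, d_hidden))  # initial hidden state for all nodes
    for A_t, X_t in zip(snapshots, features):
        H = fused_recurrent_graph_step(A_t, X_t, H, W_x, W_h)
    return H  # final node embeddings

# Toy dynamic graph: 5 nodes, 3 snapshots, random topology and features.
rng = np.random.default_rng(0)
n, d_in, d_hidden, T = 5, 4, 8, 3
snapshots = [(rng.random((n, n)) < 0.4).astype(float) for _ in range(T)]
features = [rng.standard_normal((n, d_in)) for _ in range(T)]
emb = run_dynamic_gnn(snapshots, features, d_hidden, rng)
print(emb.shape)  # one d_hidden-dimensional embedding per node
```

The contrast with the two-stage design is in `fused_recurrent_graph_step`: the previous hidden state `H_prev` is aggregated over the current topology `A_t` together with the current features, rather than being produced by a static encoder and post-processed by an RNN.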
Pages: 1284-1297
Number of pages: 14
Related Papers
50 records in total
  • [21] Graph representation learning via simple jumping knowledge networks
    Yang, Fei
    Zhang, Huyin
    Tao, Shiming
    Hao, Sheng
    APPLIED INTELLIGENCE, 2022, 52 (10) : 11324 - 11342
  • [23] Stable dynamic backpropagation learning in recurrent neural networks
    Jin, LA
    Gupta, MM
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1999, 10 (06): : 1321 - 1334
  • [24] Reinforcement Learning via Recurrent Convolutional Neural Networks
    Shankar, Tanmay
    Dwivedy, Santosha K.
    Guha, Prithwijit
    2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2016, : 2592 - 2597
  • [25] FL-GNNs: Robust Network Representation via Feature Learning Guided Graph Neural Networks
    Wang, Beibei
    Jiang, Bo
    Ding, Chris
    IEEE TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING, 2024, 11 (01): : 750 - 760
  • [26] Don Tucker finalist: predicting the combinatorial code of olfaction via graph neural networks and representation learning
    Hladis, Matej
    Fiorucci, Sebastien
    Topin, Jeremie
    CHEMICAL SENSES, 2022, 47
  • [27] SiGNN: A spike-induced graph neural network for dynamic graph representation learning
    Chen, Dong
    Zheng, Shuai
    Xu, Muhao
    Zhu, Zhenfeng
    Zhao, Yao
    PATTERN RECOGNITION, 2025, 158
  • [28] ReiPool: Reinforced Pooling Graph Neural Networks for Graph-Level Representation Learning
    Luo, Xuexiong
    Zhang, Sheng
    Wu, Jia
    Chen, Hongyang
    Peng, Hao
    Zhou, Chuan
    Li, Zhao
    Xue, Shan
    Yang, Jian
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2024, 36 (12) : 9109 - 9122
  • [29] Recurrent neural networks for reinforcement learning: Architecture, learning algorithms and internal representation
    Onat, A
    Kita, H
    Nishikawa, Y
    IEEE WORLD CONGRESS ON COMPUTATIONAL INTELLIGENCE, 1998, : 2010 - 2015
  • [30] Learning graph representation with Randomized Neural Network for dynamic texture classification
    Ribas, Lucas C.
    de Mesquita Sa Junior, Jarbas Joaci
    Manzanera, Antoine
    Bruno, Odemir M.
    APPLIED SOFT COMPUTING, 2022, 114