Cascade Prediction model based on Dynamic Graph Representation and Self-Attention

Cited by: 0
Authors
Zhang F. [1 ]
Wang X. [1 ]
Wang R. [1 ]
Tang Q. [1 ]
Han Y. [1 ]
Affiliations
[1] School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu
Source
Dianzi Keji Daxue Xuebao/Journal of the University of Electronic Science and Technology of China | 2022, Vol. 51, No. 1
Keywords
Cascade prediction; Deep learning; Dynamic graph representation; Information diffusion; Self-attention mechanism
DOI
10.12178/1001-0548.2021100
Abstract
Traditional cascade prediction models ignore the dynamics of the information diffusion process and rely heavily on hand-crafted features, which leads to poor generalization and low prediction accuracy. This paper proposes DySatCas (information cascade prediction with dynamic graph representation and self-attention), a cascade prediction model that combines dynamic graph representation with a self-attention mechanism. The model is trained end to end, avoiding the difficulty of representing cascade graphs with hand-crafted features, and captures the dynamic evolution of the cascade graph through sub-graph sampling. A self-attention mechanism is introduced to better fuse the dynamic structural changes and temporal characteristics of the information cascade graph learned within the observation window; it allocates attention weights across the network sensibly, reduces information loss, and improves prediction performance. Experimental results show that DySatCas achieves significantly higher prediction accuracy than existing baseline models. Copyright ©2022 Journal of University of Electronic Science and Technology of China. All rights reserved.
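The temporal fusion step described in the abstract can be sketched as plain scaled dot-product self-attention applied to a sequence of per-snapshot cascade embeddings (one per sampled sub-graph in the observation window). This is a minimal illustrative sketch, not the paper's implementation: the function name, dimensions, and the random snapshot embeddings are all assumptions.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a snapshot sequence.

    X: (T, d) array, one embedding per cascade-graph snapshot.
    Returns: (T, d) array of attention-weighted embeddings, where each
    output row mixes all snapshots according to their similarity.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                        # (T, T) pairwise scores
    scores -= scores.max(axis=1, keepdims=True)          # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)        # row-wise softmax
    return weights @ X                                   # weighted mix of snapshots

# Toy example: 4 snapshots of a 3-dimensional cascade embedding,
# e.g. as produced by a graph encoder on each sampled sub-graph.
rng = np.random.default_rng(0)
snapshots = rng.normal(size=(4, 3))
fused = self_attention(snapshots)
print(fused.shape)  # (4, 3)
```

In the model described above, such fused representations would then feed a regression head that predicts the future cascade size; that stage is omitted here.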
Pages: 83-90
Page count: 7
References
19 entries
  • [11] LI C, MA J, GUO X, et al., DeepCas: An end-to-end predictor of information cascades, Proceedings of the 26th International Conference on World Wide Web, pp. 577-586, (2017)
  • [12] PEROZZI B, AL-RFOU R, SKIENA S., DeepWalk: Online learning of social representations, Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 701-710, (2014)
  • [13] PENNINGTON J, SOCHER R, MANNING C D., GloVe: Global vectors for word representation, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543, (2014)
  • [14] SANKAR A, WU Y, GOU L, et al., DySAT: Deep neural representation learning on dynamic graphs via self-attention networks, Proceedings of the 13th ACM International Conference on Web Search and Data Mining (WSDM), pp. 519-527, (2020)
  • [15] XUE H, YANG L, JIANG W, et al., Modeling dynamic heterogeneous network for link prediction using hierarchical attention with temporal RNN
  • [16] GOYAL P, CHHETRI S R, CANEDO A M., Capturing network dynamics using dynamic graph representation learning
  • [17] ZHOU F, XU X, ZHANG K, et al., Variational information diffusion for probabilistic cascades prediction, IEEE INFOCOM 2020 - IEEE Conference on Computer Communications, pp. 1618-1627, (2020)
  • [18] GEHRING J, AULI M, GRANGIER D, et al., Convolutional sequence to sequence learning, International Conference on Machine Learning, pp. 1243-1252, (2017)
  • [19] WANG J, ZHENG V W, LIU Z, et al., Topological recurrent neural network for diffusion prediction, 2017 IEEE International Conference on Data Mining (ICDM), pp. 475-484, (2017)