Fast and Accurate Network Embeddings via Very Sparse Random Projection

Cited by: 37
Authors
Chen, Haochen [1 ]
Sultan, Syed Fahad [1 ]
Tian, Yingtao [1 ]
Chen, Muhao [2 ]
Skiena, Steven [1 ]
Affiliations
[1] SUNY Stony Brook, Stony Brook, NY 11794 USA
[2] Univ Calif Los Angeles, Los Angeles, CA USA
Source
PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT (CIKM '19) | 2019
Keywords
network embeddings; network representation learning; random projection;
DOI
10.1145/3357384.3357879
CLC number
TP301 [Theory, Methods]
Discipline code
081202
Abstract
We present FastRP, a scalable and high-performance algorithm for learning distributed node representations in a graph. FastRP is over 4,000 times faster than state-of-the-art methods such as DeepWalk and node2vec, while achieving comparable or better performance on several real-world networks across a variety of downstream tasks. We observe that most network embedding methods consist of two components: constructing a node similarity matrix, then applying dimension reduction techniques to this matrix. We show that the success of these methods should be attributed to the proper construction of the similarity matrix, rather than to the dimension reduction method employed. FastRP is a scalable algorithm for network embeddings built on two key design choices: 1) it explicitly constructs a node similarity matrix that captures transitive relationships in a graph and normalizes matrix entries based on node degrees; 2) it uses very sparse random projection, a scalable, optimization-free method for dimension reduction. An added benefit of combining these two choices is that node embeddings can be computed iteratively, so the similarity matrix never needs to be constructed explicitly, which speeds FastRP up further. FastRP is also advantageous for its ease of implementation, parallelization, and hyperparameter tuning. The source code is available at https://github.com/GTmac/FastRP.
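The two design choices described above, a similarity matrix built from powers of a normalized adjacency matrix and very sparse random projection, can be sketched in a few lines of NumPy. This is an illustrative simplification under stated assumptions, not the authors' implementation: the function names and the `weights` parameter are hypothetical, plain row normalization stands in for the paper's degree-based entry normalization, and the projection matrix is stored densely for readability.

```python
import numpy as np

def very_sparse_projection(n, dim, s=3, seed=0):
    """Very sparse random projection matrix (Achlioptas / Li et al. style):
    entries are +sqrt(s), 0, -sqrt(s) with probabilities 1/(2s), 1-1/s, 1/(2s)."""
    rng = np.random.default_rng(seed)
    return rng.choice(
        [np.sqrt(s), 0.0, -np.sqrt(s)],
        size=(n, dim),
        p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)],
    )

def fastrp_sketch(A, dim=8, weights=(1.0, 1.0), s=3, seed=0):
    """Simplified FastRP-style embedding (hypothetical sketch).

    A       : dense (n x n) adjacency matrix
    weights : one weight per power of the transition matrix to combine
    """
    n = A.shape[0]
    deg = A.sum(axis=1)
    deg = np.where(deg == 0, 1.0, deg)   # guard isolated nodes
    S = A / deg[:, None]                 # row-normalized transition matrix
    R = very_sparse_projection(n, dim, s, seed)
    emb = np.zeros((n, dim))
    M = R
    for w in weights:
        M = S @ M                        # iterate: S^k R, never forming S^k
        emb += w * M
    return emb
```

Because each step multiplies the current projected matrix by S, the k-th power S^k R is obtained without ever materializing S^k or the similarity matrix itself, which is the source of the speedup the abstract describes.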
Pages: 399-408
Page count: 10
Related Papers
39 records in total
  • [21] Levy, O., 2014. Advances in Neural Information Processing Systems, Vol. 27.
  • [22] Mikolov, T., 2013. Proceedings of the 1st International Conference on Learning Representations. DOI 10.48550/arXiv.1301.3781.
  • [23] Newell, C., 2015. Proceedings of the 9th ACM Conference on Recommender Systems, p. 163. DOI 10.1145/2792838.2800180.
  • [24] Perozzi, B., 2014. Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, p. 701. DOI 10.1145/2623330.2623732.
  • [25] Qiu, J.; Dong, Y.; Ma, H.; Li, J.; Wang, K.; Tang, J. Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec. WSDM'18: Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, 2018, pp. 459-467.
  • [26] Qiu, J., 2019. NetSMF: Large-Scale Network Embedding as Sparse Matrix Factorization.
  • [27] Roweis, S. T.; Saul, L. K. Nonlinear dimensionality reduction by locally linear embedding. Science, 2000, 290(5500), pp. 2323-2326.
  • [28] Salton, G.; Buckley, C. Term-weighting approaches in automatic text retrieval. Information Processing & Management, 1988, 24(5), pp. 513-523.
  • [29] Tang, J.; Qu, M.; Mei, Q. PTE: Predictive Text Embedding through Large-scale Heterogeneous Text Networks. KDD'15: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2015, pp. 1165-1174.
  • [30] Tang, J.; Qu, M.; Wang, M.; Zhang, M.; Yan, J.; Mei, Q. LINE: Large-scale Information Network Embedding. Proceedings of the 24th International Conference on World Wide Web (WWW 2015), 2015, pp. 1067-1077.