Graph Representation Learning

Cited by: 3
Author
Hamilton W.L. [1 ]
Affiliation
[1] McGill University and Mila-Quebec Artificial Intelligence Institute, Canada
Source
Synthesis Lectures on Artificial Intelligence and Machine Learning | 2020 / Vol. 14 / No. 3
Keywords
deep learning; geometric deep learning; graph convolutions; graph embeddings; graph neural networks; graph signal processing; knowledge graphs; network analysis; node embeddings; relational data; social networks; spectral graph theory;
DOI
10.2200/S01045ED1V01Y202009AIM046
Abstract
Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial for creating systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, including techniques for deep graph embeddings, generalizations of convolutional neural networks to graph-structured data, and neural message-passing approaches inspired by belief propagation. These advances in graph representation learning have led to new state-of-the-art results in numerous domains, including chemical synthesis, 3D vision, recommender systems, question answering, and social network analysis. This book provides a synthesis and overview of graph representation learning. It begins with a discussion of the goals of graph representation learning as well as key methodological foundations in graph theory and network analysis. Following this, the book introduces and reviews methods for learning node embeddings, including random-walk-based methods and applications to knowledge graphs. It then provides a technical synthesis and introduction to the highly successful graph neural network (GNN) formalism, which has become a dominant and fast-growing paradigm for deep learning with graph data. The book concludes with a synthesis of recent advancements in deep generative models for graphs, a nascent but quickly growing subset of graph representation learning. © 2020 Morgan and Claypool Publishers. All rights reserved.
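The abstract refers to the neural message-passing formalism that underlies GNNs. As a rough, hedged illustration only (not code from the book), the sketch below implements a single mean-aggregation message-passing layer in NumPy; the function name gnn_layer, the weight matrices, and all dimensions are hypothetical choices made for this example.

# A minimal sketch of one neural message-passing (GNN) layer, assuming a
# mean-aggregation update; names and dimensions are illustrative only.
import numpy as np

def gnn_layer(A, H, W_self, W_neigh):
    """One round of message passing on an undirected graph.

    A        : (n, n) binary adjacency matrix
    H        : (n, d_in) node features from the previous layer
    W_self   : (d_in, d_out) weights applied to a node's own representation
    W_neigh  : (d_in, d_out) weights applied to aggregated neighbor messages
    """
    deg = A.sum(axis=1, keepdims=True)   # node degrees
    deg[deg == 0] = 1.0                  # guard isolated nodes against division by zero
    messages = (A @ H) / deg             # mean-aggregate neighbor features
    return np.maximum(0.0, H @ W_self + messages @ W_neigh)  # ReLU update

# Tiny usage example: a 3-node path graph with random 4-dimensional features.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = rng.normal(size=(3, 4))
H_next = gnn_layer(A, H, rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
print(H_next.shape)  # (3, 8)

Stacking several such layers lets each node's representation incorporate information from progressively larger neighborhoods, which is the core idea of the GNN paradigm the abstract describes.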
Pages: 1 - 159
Number of pages: 158