Learning Laplacian Matrix in Smooth Graph Signal Representations

Cited by: 442
Authors
Dong, Xiaowen [1 ]
Thanou, Dorina [2 ]
Frossard, Pascal [2 ]
Vandergheynst, Pierre [2 ]
Affiliations
[1] MIT, Media Lab, Cambridge, MA 02139 USA
[2] Ecole Polytech Fed Lausanne, Signal Proc Labs, LTS4, LTS2, CH-1015 Lausanne, Switzerland
Funding
Swiss National Science Foundation
Keywords
Laplacian matrix learning; graph signal processing; representation theory; factor analysis; Gaussian prior; variable selection; brain connectivity; model; wavelet
DOI
10.1109/TSP.2016.2602809
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
The construction of a meaningful graph plays a crucial role in the success of many graph-based representations and algorithms for handling structured data, especially in the emerging field of graph signal processing. However, a meaningful graph is not always readily available from the data, nor is it easy to define depending on the application domain. In particular, it is often desirable in graph signal processing applications that the graph be chosen such that the data admit a certain regularity or smoothness on it. In this paper, we address the problem of learning graph Laplacians, which is equivalent to learning graph topologies, such that the input data form graph signals with smooth variations on the resulting topology. To this end, we adopt a factor analysis model for the graph signals and impose a Gaussian probabilistic prior on the latent variables that control these signals. We show that the Gaussian prior leads to an efficient representation that favors the smoothness property of the graph signals. We then propose an algorithm for learning graphs that enforces such a property and is based on minimizing the variations of the signals on the learned graph. Experiments on both synthetic and real-world data demonstrate that the proposed graph learning framework can efficiently infer meaningful graph topologies from signal observations under the smoothness prior.
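The abstract describes learning a valid graph Laplacian by minimizing the variation (smoothness) of the observed signals on the learned graph. The sketch below is a minimal illustration of that idea rather than the authors' published algorithm: the alternating scheme, the parameters alpha and beta, and the use of numpy/cvxpy are assumptions made for the example. It relies on the identity tr(Y^T L Y) = sum_ij L_ij (Y Y^T)_ij, which makes the smoothness term linear in L.

# Minimal sketch (illustrative only): alternate between denoising the observed
# signals and fitting a valid Laplacian under a smoothness objective.
import numpy as np
import cvxpy as cp

def learn_laplacian(X, alpha=1.0, beta=1.0, n_iters=10):
    # X: n x p array holding p signal observations on n nodes (assumed layout)
    n = X.shape[0]
    Y = X.copy()
    L_val = np.eye(n)
    for _ in range(n_iters):
        # L-step: tr(Y^T L Y) is linear in L and equals sum_ij L_ij * (Y Y^T)_ij
        S = Y @ Y.T
        L = cp.Variable((n, n), symmetric=True)
        constraints = [
            cp.trace(L) == n,              # fix the overall scale of the graph
            L @ np.ones(n) == 0,           # rows of a Laplacian sum to zero
            L - cp.diag(cp.diag(L)) <= 0,  # non-positive off-diagonal entries
        ]
        objective = cp.Minimize(alpha * cp.sum(cp.multiply(S, L))
                                + beta * cp.sum_squares(L))
        cp.Problem(objective, constraints).solve()
        L_val = L.value
        # Y-step: closed-form smoothing of X, Y = (I + alpha*L)^{-1} X
        Y = np.linalg.solve(np.eye(n) + alpha * L_val, X)
    return L_val, Y

As a toy sanity check, signals generated to vary smoothly on a known graph should yield a learned Laplacian whose strong off-diagonal entries align with edges between nodes that take similar signal values.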
Pages: 6160-6173
Number of pages: 14