Ising Models with Latent Conditional Gaussian Variables

Cited by: 0
Authors
Nussbaum, Frank [1 ]
Giesen, Joachim [1 ]
Affiliations
[1] Friedrich Schiller Univ Jena, Inst Informat, Jena, Germany
Source
ALGORITHMIC LEARNING THEORY, 2019, Vol. 98
Keywords
Ising Models; Latent Variables; Sparse and Low-Rank Matrices; Maximum-Entropy Principle; High-Dimensional Consistency
DOI: Not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline Classification Codes: 081104; 0812; 0835; 1405
Abstract
Ising models describe the joint probability distribution of a vector of binary feature variables. Typically, not all variables interact with each other, and one is interested in learning the presumably sparse network structure of the interacting variables. In the presence of latent variables, however, the conventional approach of learning a sparse model can fail, because the latent variables induce indirect interactions among the observed variables. When there are only a few latent conditional Gaussian variables, these spurious interactions contribute an additional low-rank component to the interaction parameters of the observed Ising model. We therefore propose to learn a sparse + low-rank decomposition of the parameters of an Ising model by solving a convex regularized likelihood problem. We show that the same problem arises as the dual of a maximum-entropy problem with a new type of relaxation, in which the sample means collectively need to match the expected values only up to a given tolerance. The solution of the convex optimization problem has consistency properties in the high-dimensional setting, where the number of observed binary variables and the number of latent conditional Gaussian variables are allowed to grow with the number of training samples.
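To make the convex program described in the abstract concrete, the following is a sketch of the typical form such a sparse + low-rank regularized likelihood problem takes; the exact notation, the sign convention for the low-rank component, and the penalty weights $\alpha$ and $\lambda_n$ are assumptions for illustration, not taken verbatim from the paper:

\[
  \min_{S,\,L}\;\; \ell_n(S + L) \;+\; \lambda_n \bigl( \alpha\,\lVert S\rVert_1 + \lVert L\rVert_* \bigr)
  \qquad \text{subject to} \quad L \succeq 0,
\]

where $\ell_n$ is the negative average log-likelihood of an Ising model whose pairwise interaction matrix is $S + L$, the $\ell_1$-norm $\lVert S\rVert_1$ promotes a sparse component of direct interactions, the nuclear norm $\lVert L\rVert_*$ promotes a low-rank component induced by marginalizing out the latent conditional Gaussian variables, and $\alpha, \lambda_n > 0$ balance the two penalties.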
Pages: 13