Learning Topic Models by Belief Propagation

Cited by: 34
Authors
Zeng, Jia [1 ]
Cheung, William K. [2 ]
Liu, Jiming [2 ]
Affiliations
[1] Soochow Univ, Sch Comp Sci & Technol, Suzhou 215006, Peoples R China
[2] Hong Kong Baptist Univ, Dept Comp Sci, Kowloon Tong, Hong Kong, Peoples R China
Keywords
Latent Dirichlet allocation; topic models; belief propagation; message passing; factor graph; Bayesian networks; Markov random fields; hierarchical Bayesian models; Gibbs sampling; variational Bayes; EM;
DOI
10.1109/TPAMI.2012.185
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model for probabilistic topic modeling, which attracts worldwide interest and touches on many important applications in text mining, computer vision, and computational biology. This paper represents the collapsed LDA as a factor graph, which enables the classic loopy belief propagation (BP) algorithm to be used for approximate inference and parameter estimation. Although the two commonly used approximate inference methods, variational Bayes (VB) and collapsed Gibbs sampling (GS), have achieved great success in learning LDA, the proposed BP is competitive in both speed and accuracy, as validated by encouraging experimental results on four large-scale document datasets. Furthermore, the BP algorithm has the potential to become a generic scheme for learning variants of LDA-based topic models in the collapsed space. To this end, we show how to learn two typical variants of LDA-based topic models, namely author-topic models (ATM) and relational topic models (RTM), using BP based on their factor graph representations.
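As a rough illustration of the message-passing scheme the abstract describes, the following is a minimal NumPy sketch of synchronous loopy BP for collapsed LDA on a factor graph. Each (document, word) pair carries a normalized message over topics; each update combines a leave-one-out document-topic term and a leave-one-out word-topic term. The function name, hyperparameter values, and dense count-matrix layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lda_bp(X, K, alpha=0.1, beta=0.01, iters=50, seed=0):
    """Synchronous loopy BP for collapsed LDA (sketch, not the paper's code).

    X : (D, W) document-word count matrix.
    K : number of topics.
    Returns mu of shape (D, W, K): per-(doc, word) topic messages.
    """
    rng = np.random.default_rng(seed)
    D, W = X.shape
    # Random normalized initial messages over topics.
    mu = rng.random((D, W, K))
    mu /= mu.sum(axis=2, keepdims=True)
    Xk = X[..., None]  # broadcastable counts, shape (D, W, 1)
    for _ in range(iters):
        # Count-weighted expected topic assignments.
        nd = np.einsum('dw,dwk->dk', X, mu)   # doc-topic counts
        nw = np.einsum('dw,dwk->wk', X, mu)   # word-topic counts
        nk = nw.sum(axis=0)                   # total counts per topic
        # Leave-one-out: exclude the current (d, w) message's own mass.
        theta = nd[:, None, :] - Xk * mu + alpha
        phi = (nw[None, :, :] - Xk * mu + beta) / \
              (nk[None, None, :] - Xk * mu + W * beta)
        mu = theta * phi
        mu /= mu.sum(axis=2, keepdims=True)   # renormalize messages
    return mu
```

After convergence, document-topic and topic-word distributions can be estimated by normalizing the count-weighted message sums plus their Dirichlet smoothing terms, mirroring the Gibbs-sampling estimators in the collapsed space.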
Pages: 1121-1134
Page count: 14