A Hierarchical Model Based on Latent Dirichlet Allocation for Action Recognition

Cited by: 22
Authors
Yang, Shuang [1]
Yuan, Chunfeng [1]
Hu, Weiming [1]
Ding, Xinmiao [2]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China
[2] Shandong Inst Business & Technol, Yantai, Peoples R China
Source
2014 22ND INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR) | 2014
DOI
10.1109/ICPR.2014.451
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Inspired by the recent success of hierarchical representations, we propose a new hierarchical variant of latent Dirichlet allocation (h-LDA) for action recognition. The model consists of an appearance group and a motion group, and we introduce a new hierarchical structure with two layers of topics in each group to learn the spatio-temporal patterns (STPs) of human actions. The basic idea is that the two layers of topics model the global STPs and the local STPs of the actions, respectively. Discrete words for the two groups are generated from two complementary kinds of features, and each topic learned in the two groups describes a particular aspect of the actions. Specifically, the mid-level topics are learned to describe the local STPs by incorporating the geometric structure information of the lower-level words. The top-level topics are learned from the mid-level topics as mixture distributions over the local STPs, which makes them suitable for representing the global STPs. In addition, we derive the learning and inference procedures via Gibbs sampling under reasonable assumptions. Finally, each sample is discriminatively represented by its probability distribution over the global STPs learned by the proposed h-LDA. Experimental results on two datasets demonstrate the effectiveness of our approach for action recognition.
Pages: 2613-2618
Number of pages: 6
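
To make the two-layer idea in the abstract concrete, below is a minimal Python sketch. It is not the authors' h-LDA: the joint model learned by collapsed Gibbs sampling is approximated here by stacking two standard LDA models fitted with scikit-learn's variational inference, and the input data, vocabulary size, topic counts, and class labels are hypothetical placeholders. It only illustrates the pipeline the abstract describes: mid-level topics are learned over low-level words, top-level topics are mixtures over the mid-level topics, and each sample is classified by its distribution over the top-level (global) patterns.

```python
# Minimal sketch (NOT the authors' h-LDA): the two-layer topic hierarchy is
# approximated by stacking two standard LDA models, using variational
# inference from scikit-learn instead of the paper's collapsed Gibbs sampler.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical input: each action sample is a bag of quantized local features
# ("words"), e.g. counts over a 500-word visual vocabulary.
n_samples, vocab_size = 200, 500
word_counts = rng.poisson(1.0, size=(n_samples, vocab_size))
labels = rng.integers(0, 6, size=n_samples)       # 6 hypothetical action classes

# Layer 1: mid-level topics model the local patterns over the low-level words.
mid_lda = LatentDirichletAllocation(n_components=40, random_state=0)
mid_theta = mid_lda.fit_transform(word_counts)     # (n_samples, 40) topic proportions

# Layer 2: top-level topics are mixtures over the mid-level topics and stand in
# for the global patterns. LDA expects counts, so the mid-level proportions are
# rescaled to pseudo-counts here (an approximation of the joint model).
pseudo_counts = np.rint(mid_theta * 100)
top_lda = LatentDirichletAllocation(n_components=15, random_state=0)
top_theta = top_lda.fit_transform(pseudo_counts)   # (n_samples, 15)

# Each sample is finally represented by its distribution over the global
# patterns and fed to a discriminative classifier, as in the recognition stage.
clf = SVC(kernel="linear").fit(top_theta, labels)
print("train accuracy:", clf.score(top_theta, labels))
```

The two-stage fit is only a rough stand-in for the paper's single generative model, where both topic layers and both feature groups are inferred jointly; it is meant to show the data flow, not to reproduce the reported results.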