3D skeleton-based action recognition with convolutional neural networks

Cited: 8
Authors
Van-Nam Hoang [2 ]
Thi-Lan Le [2 ]
Thanh-Hai Tran [2 ]
Hai-Vu [2 ]
Van-Toi Nguyen [1 ]
Affiliations
[1] Posts & Telecommun Inst Technol, Ho Chi Minh City, Vietnam
[2] Hanoi Univ Sci & Technol, MICA Int Res Inst, Grenoble INP, CNRS, UMI2954, Hanoi, Vietnam
Source
2019 INTERNATIONAL CONFERENCE ON MULTIMEDIA ANALYSIS AND PATTERN RECOGNITION (MAPR) | 2019
Keywords
action recognition; 3D skeleton; CNN; LSTM
DOI
10.1109/mapr.2019.8743545
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Activity recognition based on skeletons has drawn a lot of attention due to its wide applications in human-computer interaction and surveillance systems. Compared with image data, a skeleton is robust to background changes and computationally efficient thanks to its low-dimensional representation. With the rise of deep neural networks, many works have applied both CNN and LSTM models to this problem. In this paper, we propose a framework for action recognition from skeleton data and evaluate it with different network architectures. We first enrich the feature representation by adding motion information to the skeleton image, which provides useful cues to the networks. Different network architectures are then employed and evaluated to give insight into how well they perform on this kind of data. Finally, we evaluate the system on two public datasets, NTU-RGB+D and CMDFall, to show its efficiency and feasibility. The proposed method achieves 76.8% accuracy on NTU-RGB+D and 45.23% on CMDFall, which are competitive results.
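The abstract's core idea is to encode a skeleton sequence as an image and augment it with motion information before passing it to a CNN. The sketch below illustrates one common realization of that idea; the frames-by-joints layout, min-max normalization, and frame-difference motion channels are illustrative assumptions, not the authors' exact formulation.

```python
# A minimal sketch of a skeleton-image encoding with motion channels.
# Assumptions (not the paper's exact method): frames map to image rows,
# joints to columns, and motion is the frame-to-frame coordinate difference.
import numpy as np

def skeleton_to_image(seq):
    """Encode a skeleton sequence as an image-like tensor.

    seq: (T, J, 3) array of T frames, J joints, (x, y, z) coordinates.
    Returns a (T, J, 6) tensor: 3 position channels + 3 motion channels.
    """
    # Min-max normalize coordinates to [0, 1], a common step when joint
    # coordinates are treated as pixel intensities.
    lo = seq.min(axis=(0, 1), keepdims=True)
    hi = seq.max(axis=(0, 1), keepdims=True)
    pos = (seq - lo) / (hi - lo + 1e-8)

    # Motion information: temporal difference between consecutive frames,
    # zero-padded at the first frame so shapes line up.
    motion = np.zeros_like(pos)
    motion[1:] = pos[1:] - pos[:-1]

    # Stack position and motion along the channel axis.
    return np.concatenate([pos, motion], axis=-1)

# Example: a random 64-frame sequence with 25 joints (NTU-RGB+D skeletons
# have 25 joints); the result can be fed to any 2D CNN as a 6-channel image.
seq = np.random.rand(64, 25, 3).astype(np.float32)
print(skeleton_to_image(seq).shape)  # (64, 25, 6)
```

Such a 6-channel tensor can be consumed directly by a 2D CNN, or flattened per frame into a feature vector for an LSTM, matching the CNN/LSTM comparison the abstract describes.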
Pages: 6