Real-time human action prediction using pose estimation with attention-based LSTM network

Cited by: 0
Authors
A. Bharathi
Rigved Sanku
M. Sridevi
S. Manusubramanian
S. Kumar Chandar
Affiliations
[1] National Institute of Technology
[2] Liquid Propulsion Systems Centre, ISRO
[3] Christ University
Source
Signal, Image and Video Processing | 2024, Vol. 18
Keywords
Skeleton key joints; Attention mechanism; LSTM; Pose estimation;
DOI: not available
Abstract
Human action prediction in live-streaming video is a popular task in computer vision and pattern recognition. It attempts to identify activities performed by a human in an image or video. Artificial intelligence (AI)-based technologies are now required for security and human-behaviour analysis. These actions involve intricate motion patterns. For the visual representation of video frames, conventional action-identification approaches mostly rely on pre-trained weights of various AI architectures. This paper proposes a deep neural network, an attention-based long short-term memory (LSTM) network, for skeleton-based activity prediction from video. The proposed model has been evaluated on the BerkeleyMHAD dataset, which has 11 action classes. Our experimental results are compared against the performance of the LSTM and attention-based LSTM networks for 6 action classes: Jumping, Clapping, Stand-up, Sit-down, Waving one hand (right) and Waving two hands. The proposed method has also been tested in a real-time environment and is unaffected by pose, camera orientation and apparel. The proposed system attained an accuracy of 95.94% on the BerkeleyMHAD dataset. Hence, the proposed method is useful in intelligent vision computing systems for automatically identifying human activity in unpremeditated behaviour.
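To make the pipeline described in the abstract concrete, below is a minimal numpy sketch of attention pooling over per-frame skeleton features: a sequence of flattened keypoints is passed through a simple tanh recurrence (a stand-in for the paper's LSTM, whose gating details are not given in the abstract), additive attention weights each frame's hidden state, and the attended context vector is classified into 6 action classes. All names, sizes, and the random weights are illustrative assumptions, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, J, H, C = 30, 25, 64, 6   # frames, skeleton joints, hidden size, action classes
D = J * 2                    # per-frame feature: flattened (x, y) joint coordinates

# Toy random parameters standing in for trained weights (hypothetical).
Wx = rng.normal(0, 0.1, (D, H))   # input-to-hidden
Wh = rng.normal(0, 0.1, (H, H))   # hidden-to-hidden
w_att = rng.normal(0, 0.1, H)     # additive-attention scoring vector
Wc = rng.normal(0, 0.1, (H, C))   # context-to-class

def predict(keypoints):
    """keypoints: (T, J, 2) skeleton sequence -> class probabilities of shape (C,)."""
    x = keypoints.reshape(T, D)
    h = np.zeros(H)
    states = []
    for t in range(T):                    # simple recurrence (LSTM stand-in)
        h = np.tanh(x[t] @ Wx + h @ Wh)
        states.append(h)
    Hs = np.stack(states)                 # (T, H) hidden state per frame
    alpha = softmax(Hs @ w_att)           # attention weight per frame, sums to 1
    context = alpha @ Hs                  # attention-weighted sum of hidden states
    return softmax(context @ Wc)          # class probabilities

probs = predict(rng.normal(size=(T, J, 2)))
# probs has shape (C,) and sums to 1; the argmax would be the predicted action.
```

The attention step is what lets the classifier weight informative frames (e.g. the apex of a jump) over idle ones, which is the motivation the keywords suggest for combining an attention mechanism with the LSTM.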
Pages: 3255–3264 (9 pages)