Human behaviour analysis based on spatio-temporal dual-stream heterogeneous convolutional neural network

Cited by: 1
Authors
Ye, Qing [1 ]
Zhao, Yuqi [1 ]
Zhong, Haoxin [2 ]
Affiliations
[1] North China Univ Technol, Sch Informat Sci & Technol, Beijing 100144, Peoples R China
[2] State Grid Beijing Elect Power Construct Engn Cons, 188 Chengshousi Rd, Beijing 100164, Peoples R China
Keywords
human behaviour analysis; STDNet; optical flow; feature extraction; dual-stream network; recognition
DOI
10.1504/IJCSE.2023.135277
Chinese Library Classification
TP39 [Computer Applications]
Subject classification codes
081203; 0835
Abstract
At present, many problems in human behaviour analysis remain unsolved, such as insufficient utilisation of behavioural feature information and slow processing speed. We propose a human behaviour analysis algorithm based on a spatio-temporal dual-stream heterogeneous convolutional neural network (STDNet). The algorithm improves on the basic structure of the traditional dual-stream network. To extract spatial information, DenseNet uses its densely connected hierarchy to extract spatial features from the video's RGB frames. To extract motion information, BN-Inception extracts temporal features from the video's optical-flow images. Finally, the two feature streams are fused by a multi-layer perceptron and sent to a softmax classifier. Experimental results on the UCF101 dataset show that the algorithm makes effective use of the spatio-temporal feature information in video, reduces the computational load of the network model, and greatly improves the ability to distinguish similar actions.
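The abstract specifies the whole pipeline (a DenseNet spatial stream over RGB frames, a BN-Inception temporal stream over stacked optical flow, MLP fusion, softmax), so a compact sketch can make it concrete. The following is a minimal PyTorch/torchvision rendering under stated assumptions, not the authors' implementation: `STDNetSketch`, `flow_stack`, and the 512-unit fusion MLP are illustrative, and torchvision's GoogLeNet (Inception v1, implemented in torchvision with batch normalisation) stands in for BN-Inception, which torchvision does not ship.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class STDNetSketch(nn.Module):
    """Dual-stream sketch: DenseNet spatial stream + Inception temporal stream."""

    def __init__(self, num_classes: int = 101, flow_stack: int = 10):
        super().__init__()
        # Spatial stream: DenseNet-121 dense blocks over a single RGB frame.
        densenet = models.densenet121(weights=None)
        self.spatial = densenet.features
        self.spatial_dim = densenet.classifier.in_features  # 1024

        # Temporal stream: the paper uses BN-Inception over stacked optical-flow
        # images; torchvision has no BN-Inception, so its GoogLeNet stands in.
        inception = models.googlenet(weights=None, aux_logits=False, init_weights=True)
        # Widen the first conv to accept 2 * flow_stack flow channels
        # (horizontal and vertical flow per frame).
        inception.conv1.conv = nn.Conv2d(
            2 * flow_stack, 64, kernel_size=7, stride=2, padding=3, bias=False
        )
        inception.fc = nn.Identity()  # expose the 1024-d pooled features
        self.temporal = inception
        self.temporal_dim = 1024

        # Fusion: a multi-layer perceptron over the concatenated stream features,
        # followed (in forward) by a softmax classifier.
        self.fusion = nn.Sequential(
            nn.Linear(self.spatial_dim + self.temporal_dim, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        # rgb:  (B, 3, H, W) one RGB frame per clip
        # flow: (B, 2 * flow_stack, H, W) stacked optical-flow fields
        s = F.relu(self.spatial(rgb), inplace=True)
        s = torch.flatten(F.adaptive_avg_pool2d(s, 1), 1)
        t = self.temporal(flow)
        return torch.softmax(self.fusion(torch.cat([s, t], dim=1)), dim=1)


# Smoke test with random tensors at the usual 224x224 input size.
model = STDNetSketch()
scores = model(torch.randn(2, 3, 224, 224), torch.randn(2, 20, 224, 224))
print(scores.shape)  # torch.Size([2, 101])
```

Stacking flow_stack = 10 flow fields (20 channels) follows common two-stream practice; the abstract does not state the stack depth, so treat it as a tunable assumption.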
Pages: 673-683
Page count: 12