SPCANet: congested crowd counting via strip pooling combined attention network

Times cited: 0
Authors
Yuan, Zhongyuan [1 ]
Affiliations
[1] Hunan Agr Univ, Coll Informat & Intelligence, Changsha, Hunan, Peoples R China
Keywords
Crowd counting; Convolutional neural network; Spatial pooling; Channel attention;
DOI
10.7717/peerj-cs.2273
CLC classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Crowd counting aims to estimate the number and distribution of people in crowded places and is an important research direction in object counting. It is widely used in public place management, crowd behavior analysis, and other scenarios, demonstrating strong practicality. In recent years, crowd-counting technology has developed rapidly. However, in highly crowded and noisy scenes, the counting performance of most models is still seriously degraded by perspective distortion, dense occlusion, and inconsistent crowd distribution. Perspective distortion causes crowds to appear at different sizes and shapes in the image, while dense occlusion and inconsistent crowd distribution prevent parts of the crowd from being captured completely. This ultimately leads to imperfect capture of spatial information by the model. To address these problems, we propose a strip pooling combined attention network (SPCANet) based on normed-deformable convolution (NDConv). We model long-distance dependencies more efficiently by introducing strip pooling. In contrast to traditional square pooling kernels, strip pooling uses long, narrow kernels (1×N or N×1) to handle dense crowds, mutual occlusion, and overlap. Efficient channel attention (ECA), a mechanism for learning channel attention via a local cross-channel interaction strategy, is also introduced in SPCANet. This module generates channel attention through a fast 1D convolution, reducing model complexity while improving performance as much as possible. Four mainstream datasets, ShanghaiTech Part A, ShanghaiTech Part B, UCF-QNRF, and UCF CC 50, were used in extensive experiments; SPCANet improves on the baseline in mean absolute error (MAE), achieving 60.9, 7.3, 90.8, and 161.1, respectively, validating its effectiveness. Meanwhile, the mean squared error (MSE) decreases by 5.7% on average across the four datasets, and robustness is greatly improved.
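The two mechanisms the abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the strip-pooling function below only shows the 1×N / N×1 averaging and broadcast step (omitting the learned 1D convolutions of the full module from Hou et al.), and the ECA function uses an illustrative fixed kernel in place of the learned fast 1D convolution.

```python
import numpy as np

def strip_pool(x):
    """Strip-pooling sketch for a single (H, W) feature map.

    Averages along each row (a 1 x W strip) and each column (an H x 1
    strip), then broadcasts both strips back to (H, W) and sums them,
    capturing long-range context along each axis.
    """
    h_strip = x.mean(axis=1, keepdims=True)  # (H, 1): one value per row
    w_strip = x.mean(axis=0, keepdims=True)  # (1, W): one value per column
    return h_strip + w_strip                 # broadcast back to (H, W)

def eca_weights(channel_means, k=3):
    """ECA-style channel attention sketch.

    channel_means: (C,) globally average-pooled channel descriptors.
    A 1D convolution of size k mixes each channel with its k-1 nearest
    neighbors (local cross-channel interaction); a sigmoid maps the
    result to attention weights in (0, 1). The kernel here is a fixed
    box filter standing in for ECA's learned 1D kernel.
    """
    kernel = np.ones(k) / k                          # illustrative, not learned
    mixed = np.convolve(channel_means, kernel, mode="same")
    return 1.0 / (1.0 + np.exp(-mixed))              # sigmoid gate
```

In use, the ECA weights would rescale a (C, H, W) feature tensor as `x * w[:, None, None]`, so that informative channels are emphasized at negligible parameter cost.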
Pages: 21
Related references
47 items in total
  • [11] Deformable Convolutional Networks
    Dai, Jifeng
    Qi, Haozhi
    Xiong, Yuwen
    Li, Yi
    Zhang, Guodong
    Hu, Han
    Wei, Yichen
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 764 - 773
  • [12] Histograms of oriented gradients for human detection
    Dalal, N
    Triggs, B
    [J]. 2005 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOL 1, PROCEEDINGS, 2005, : 886 - 893
  • [13] Pedestrian Detection: An Evaluation of the State of the Art
    Dollar, Piotr
    Wojek, Christian
    Schiele, Bernt
    Perona, Pietro
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2012, 34 (04) : 743 - 761
  • [14] Object Detection with Discriminatively Trained Part-Based Models
    Felzenszwalb, Pedro F.
    Girshick, Ross B.
    McAllester, David
    Ramanan, Deva
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2010, 32 (09) : 1627 - 1645
  • [15] Ge Weina, 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), P2913, DOI 10.1109/CVPRW.2009.5206621
  • [16] Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672
  • [17] Extremely Overlapping Vehicle Counting
    Guerrero-Gomez-Olmedo, Ricardo
    Torre-Jimenez, Beatriz
    Lopez-Sastre, Roberto
    Maldonado-Bascon, Saturnino
    Onoro-Rubio, Daniel
    [J]. PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2015), 2015, 9117 : 423 - 431
  • [18] Ha Kyoo-Man, 2023, F1000Res, V12, P829, DOI 10.12688/f1000research.135265.2
  • [19] Strip Pooling: Rethinking Spatial Pooling for Scene Parsing
    Hou, Qibin
    Zhang, Li
    Cheng, Ming-Ming
    Feng, Jiashi
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 4002 - 4011
  • [20] Hu J, 2018, PROC CVPR IEEE, P7132, DOI [10.1109/CVPR.2018.00745, 10.1109/TPAMI.2019.2913372]