Floor-Field-Guided Neural Model for Crowd Counting

Cited: 0
|
Authors
Habara, Takehiro [1 ]
Kojima, Ryosuke [2 ]
Affiliations
[1] Kyoto Univ, Grad Sch Informat, Kyoto 6068501, Japan
[2] Kyoto Univ, Grad Sch Med, Kyoto 6068501, Japan
Source
IEEE ACCESS | 2024, Vol. 12
Funding
Japan Society for the Promotion of Science;
Keywords
Neural networks; Estimation; Adaptation models; Computational modeling; Videos; Automata; Training; Crowdsourcing; Density measurement; Crowd counting; deep learning; followability; static/dynamic floor field models; CELLULAR-AUTOMATON MODEL; NETWORK;
DOI
10.1109/ACCESS.2024.3483252
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Crowd counting and density estimation are the principal objectives of crowd analysis, with significant applications in surveillance, event management, and traffic design. In crowd flow research, including simulations, crowd movement exhibits characteristics such as followability and is therefore treated as a distinct flow paradigm. Recent advances in deep learning have propelled the use of neural networks tailored for crowd counting and density estimation from video feeds. Nonetheless, prior models did not consider crowd dynamics. This study proposes a novel method that combines neural networks with crowd dynamics. Specifically, we introduced a new penalty term that encodes prior knowledge of crowd dynamics and refined the neural network outputs via static/dynamic floor field models, which are grid-based crowd dynamics models. Empirical evaluation on benchmark datasets demonstrated the superiority of the proposed method over existing state-of-the-art techniques. Further per-scene analysis confirmed that crowd counting performance is highly scene-dependent and that the impact of the three methodological components (i.e., the penalty term and the two floor fields) varies across scenes. In particular, the floor field model tended to be more effective in scenes without significant changes. Our code is available on GitHub: https://github.com/hanebarla/FF-guided-NeuralCC
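The floor field idea behind the method, a grid of values over the scene that encodes where pedestrians are likely to move, updated cellular-automaton style and used to bias the network's density map, can be illustrated with a short sketch. The following Python/PyTorch code is a minimal illustration under our own assumptions, not the authors' released implementation; the names update_dynamic_floor_field and floor_field_penalty and the decay/diffusion parameters are hypothetical (see the linked repository for the actual formulation).

import torch
import torch.nn.functional as F

def update_dynamic_floor_field(field, density, decay=0.1, diffusion=0.1):
    # One cellular-automaton-style step for a dynamic floor field:
    # traces deposited by the crowd diffuse to the four neighboring
    # cells and decay over time. Tensors have shape (N, 1, H, W).
    kernel = torch.tensor([[0.0, 1.0, 0.0],
                           [1.0, 0.0, 1.0],
                           [0.0, 1.0, 0.0]]) / 4.0
    kernel = kernel.view(1, 1, 3, 3).to(field)
    neighbor_avg = F.conv2d(field, kernel, padding=1)
    field = (1.0 - decay) * ((1.0 - diffusion) * field
                             + diffusion * neighbor_avg)
    # Deposit new traces where people are currently estimated to be.
    return field + density

def floor_field_penalty(pred_density, field):
    # Hypothetical penalty: predicted density placed in cells that the
    # floor field marks as unlikely to be occupied is penalized.
    weight = 1.0 / field.clamp(min=1e-3)  # low field value -> high cost
    return (weight * pred_density).mean()

# Hypothetical training step: the penalty is added to the usual
# density-map regression loss.
# field = update_dynamic_floor_field(field, pred_density.detach())
# loss = F.mse_loss(pred_density, gt_density) \
#        + 0.01 * floor_field_penalty(pred_density, field)

This mirrors the standard static/dynamic floor field construction from cellular-automaton pedestrian models, where a static field encodes fixed attractors such as exits and a dynamic field encodes recent pedestrian traces; how the paper couples these fields to the network output is specified in the article itself.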
Pages: 154888-154900
Page count: 13
Related Papers
50 in total
  • [21] Mask Guided GAN for Density Estimation and Crowd Counting
    Yao, Hai-Yan
    Wan, Wang-Gen
    Li, Xiang
    IEEE ACCESS, 2020, 8: 31432-31443
  • [22] Concise Convolutional Neural Network for Crowd Counting
    Tong, Feifei
    Zhang, Zhaoyang
    Wang, Huan
    Wang, Yuehai
    2018 10TH INTERNATIONAL CONFERENCE ON ADVANCED INFOCOMM TECHNOLOGY (ICAIT), 2018: 174-178
  • [23] Current researches and trends of crowd counting in the field of deep learning
    Li, Zhi
    Li, Yong
    Wang, Xipeng
    PROCEEDINGS OF 2020 IEEE 4TH INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2020), 2020: 1326-1329
  • [24] Dual convolutional neural network for crowd counting
    Guo, Huaping
    Wang, Rui
    Zhang, Li
    Sun, Yange
    Multimedia Tools and Applications, 2024, 83: 26687-26709
  • [25] One Shot Crowd Counting with Deep Scale Adaptive Neural Network
    Wu, Junfeng
    Li, Zhiyang
    Qu, Wenyu
    Zhou, Yizhi
    ELECTRONICS, 2019, 8 (06)
  • [26] A survey of crowd counting and density estimation based on convolutional neural network
    Fan, Zizhu
    Zhang, Hong
    Zhang, Zheng
    Lu, Guangming
    Zhang, Yudong
    Wang, Yaowei
    NEUROCOMPUTING, 2022, 472: 224-251
  • [27] Advances in Convolution Neural Networks Based Crowd Counting and Density Estimation
    Gouiaa, Rafik
    Akhloufi, Moulay A.
    Shahbazi, Mozhdeh
    BIG DATA AND COGNITIVE COMPUTING, 2021, 5 (04)
  • [28] A floor field real-coded lattice gas model for crowd evacuation
    Tao, Y. Z.
    Dong, L. Y.
    EPL, 2017, 119 (01)
  • [29] Improved Dense Crowd Counting Method based on Residual Neural Network
    Shi, J.
    Zhou, L.
    Lv, G.
    Lin, B.
    Journal of Geo-Information Science, 2021, 23 (09): 1537-1547
  • [30] DARN: Crowd Counting Network Guided by Double Attention Refinement
    Chang, Shuhan
    Zhong, Shan
    Zhou, Lifan
    Zhou, Xuanyu
    Gong, Shengrong
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT X, 2024, 14434: 444-456