AdaCrowd: Unlabeled Scene Adaptation for Crowd Counting

Cited by: 19
Authors
Reddy, Mahesh Kumar Krishna [1]
Rochan, Mrigank [2]
Lu, Yiwei [3]
Wang, Yang [2,4]
Affiliations
[1] Simon Fraser Univ, Sch Comp Sci, Burnaby, BC V5A 1S6, Canada
[2] Univ Manitoba, Dept Comp Sci, Winnipeg, MB R3T 2N2, Canada
[3] Univ Waterloo, Dept Comp Sci, Waterloo, ON, Canada
[4] Huawei Technol Canada, Winnipeg, MB R3T 2N2, Canada
Keywords
Adaptation models; Cameras; Data models; Computational modeling; Backpropagation; Training data; Training; Computer vision; crowd counting; deep learning; scene adaptation
DOI
10.1109/TMM.2021.3062481
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
We address the problem of image-based crowd counting. In particular, we propose a new problem called unlabeled scene-adaptive crowd counting: given a new target scene, we would like a crowd counting model specifically adapted to that scene, based on target data that capture some information about it. In this paper, we propose to use one or more unlabeled images from the target scene to perform the adaptation. Compared with existing problem setups (e.g., fully supervised learning), our proposed setup is closer to the real-world deployment of crowd counting systems. We introduce a novel AdaCrowd framework to solve this problem. Our framework consists of a crowd counting network and a guiding network. The guiding network predicts some parameters in the crowd counting network based on the unlabeled images from a particular scene, which allows our model to adapt to different target scenes. Experimental results on several challenging benchmark datasets demonstrate the effectiveness of our approach compared with alternative methods. Code is available at https://github.com/maheshkkumar/adacrowd
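The abstract only sketches the architecture, so the following is a minimal PyTorch sketch of the conditioning idea it describes: a guiding network maps a handful of unlabeled target-scene images to parameters (here, per-channel scale and shift of a normalization layer) that are injected into the counting network. The class names (GuidingNet, CountingNet), the use of instance normalization, and all layer sizes are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Hypothetical sketch of unlabeled scene adaptation (not the authors' code).
import torch
import torch.nn as nn

class CountingNet(nn.Module):
    """Regresses a density map; its norm layer takes externally
    predicted affine parameters (gamma, beta) instead of learned ones."""
    def __init__(self, ch=64):
        super().__init__()
        self.conv1 = nn.Conv2d(3, ch, 3, padding=1)
        self.norm = nn.InstanceNorm2d(ch, affine=False)  # params come from the guide
        self.head = nn.Conv2d(ch, 1, 1)  # 1-channel density map

    def forward(self, x, gamma, beta):
        h = torch.relu(self.conv1(x))
        h = self.norm(h) * gamma.view(1, -1, 1, 1) + beta.view(1, -1, 1, 1)
        return self.head(h)

class GuidingNet(nn.Module):
    """Predicts (gamma, beta) from one or more unlabeled scene images."""
    def __init__(self, ch=64):
        super().__init__()
        self.feat = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.to_gamma = nn.Linear(ch, ch)
        self.to_beta = nn.Linear(ch, ch)

    def forward(self, scene_imgs):          # (N, 3, H, W): N unlabeled images
        z = self.feat(scene_imgs).mean(dim=0)  # pool a scene-level descriptor
        return 1.0 + self.to_gamma(z), self.to_beta(z)  # scale starts near identity

counting, guiding = CountingNet(), GuidingNet()
unlabeled = torch.rand(3, 3, 128, 128)  # a few images from the new scene
query = torch.rand(1, 3, 128, 128)      # image whose crowd we want to count
with torch.no_grad():                   # adaptation is a forward pass only
    gamma, beta = guiding(unlabeled)
    density = counting(query, gamma, beta)
print("estimated count:", density.sum().item())
```

Note how this matches the abstract's claim: adapting to a new scene requires neither labels nor gradient steps at deployment, only a forward pass of the guiding network on the unlabeled scene images.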
Pages: 1008 - 1019
Page count: 12