Striking a Balance: Unsupervised Cross-Domain Crowd Counting via Knowledge Diffusion

Cited by: 5
Authors
Xie, Haiyang [1 ]
Yang, Zhengwei [1 ]
Zhu, Huilin [2 ]
Wang, Zheng [1 ]
Affiliations
[1] Wuhan Univ, Sch Comp Sci, Natl Engn Res Ctr Multimedia Software, Wuhan, Peoples R China
[2] Wuhan Univ Technol, Sch Comp Sci & Artificial Intelligence, Wuhan, Peoples R China
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 | 2023
Funding
National Natural Science Foundation of China;
Keywords
Crowd Counting; Unsupervised Domain Adaptation; Knowledge Diffusion; Uncertainty;
DOI
10.1145/3581783.3611797
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Supervised crowd counting relies on manual labeling, which is costly and time-consuming, spurring interest in unsupervised methods. However, unsupervised methods suffer from a significant domain gap: a model trained on one dataset exhibits dramatic performance drops when transferred to another. This phenomenon can be attributed to diverse domain knowledge, which makes it difficult for unsupervised models to transfer between general knowledge (e.g., similar distributions) and domain-specific knowledge (e.g., unique density, perspective, illumination, etc.), leading to knowledge bias. Existing methods focus on exploring distinguishable relationships and establishing connections between the source and target domains. However, such similarity-based knowledge transfer cannot perfectly simulate the contents of the target domain, so the model fails to generalize to domain-specific knowledge. In this paper, we propose a Self-awareness Knowledge Diffusion method (SaKnD) that leverages self-knowledge without establishing cross-domain knowledge relationships, aiming to balance the knowledge bias between general and domain-specific knowledge. Specifically, we propose a strategy that evaluates uncertainty and consistency to identify clueless and informed areas, which determine the location and orientation of knowledge diffusion. Clueless areas serve as domain-specific knowledge to be optimized, while informed areas serve as general knowledge shared across domains. Extensive experiments on three standard crowd-counting benchmarks, ShanghaiTech PartA, ShanghaiTech PartB, and UCF_QNRF, show that SaKnD achieves state-of-the-art performance.
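The abstract's core idea of splitting a prediction into "clueless" (uncertain, domain-specific) and "informed" (consistent, general) areas can be illustrated with a minimal sketch. This is not the paper's actual formulation: the function name, the use of per-pixel variance across stochastic forward passes (e.g., MC-dropout) as the uncertainty/consistency signal, and the threshold are all assumptions for illustration.

```python
import numpy as np

def partition_areas(pred_stack, unc_thresh=0.05):
    """Split a density map into 'informed' and 'clueless' masks.

    pred_stack: (K, H, W) array of K stochastic forward passes
    (e.g., MC-dropout) over the same target-domain image.
    High per-pixel variance across passes marks uncertain,
    inconsistent regions; low variance marks consistent ones.
    """
    mean_map = pred_stack.mean(axis=0)   # consensus density estimate
    var_map = pred_stack.var(axis=0)     # per-pixel disagreement across passes
    clueless = var_map > unc_thresh      # uncertain areas (domain-specific)
    informed = ~clueless                 # consistent areas (general knowledge)
    return mean_map, informed, clueless
```

The clueless mask would then tell a diffusion step *where* to propagate knowledge, and the informed mask *from where*, matching the "location and orientation" phrasing in the abstract.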
Pages: 6520-6529
Page count: 10