Non-Maximum Suppression Guided Label Assignment for Object Detection in Crowd Scenes

Times Cited: 1
Authors
Jiang, Hangzhi [1 ,2 ]
Zhang, Xin [3 ]
Xiang, Shiming [1 ,2 ]
Affiliations
[1] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
[2] Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence S, Beijing 100190, Peoples R China
[3] Beijing Inst Technol, Sch Informat & Elect, Radar Res Lab, Beijing 100081, Peoples R China
Keywords
Object detection; Crowd scenes; Label assignment; Non-maximum suppression; PEDESTRIAN DETECTION; PROPOSAL;
DOI
10.1109/TMM.2023.3293333
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The detection performance in crowd scenes is limited by the difficulty of recalling hard objects (e.g., occluded objects). Such objects must be successfully detected and then retained by non-maximum suppression (NMS) while false positives are kept under control. Existing dynamic label assignment algorithms can help recall these objects by adaptively allocating appropriate positive samples; however, they ignore alignment with the selection rules of NMS. Consequently, detection in crowd scenes remains highly sensitive to the NMS threshold setting: existing methods must resort to a low NMS threshold to avoid excessive false positives, causing some objects to go unrecalled. These methods also generally lack stronger excitation for positive samples, which hinders further improving the recall of hard instances in crowd scenes. This article proposes a novel dynamic label assignment strategy for object detection in crowd scenes, called non-maximum suppression guided label assignment (NGLA), which aligns the assignment strategy with the NMS process and learns more prominent positive samples. Following NMS, NGLA introduces the IoU between each sample and its corresponding best sample to define positive and negative samples. To cooperate with NGLA, an NMS-aware loss is proposed to dynamically assign sample weights when supervising sample predictions; it likewise considers the IoU with the best sample. In addition, for better classification prediction, a regression-assisted classification branch is designed to help detectors perceive the relation between the regression prediction of each sample and that of its corresponding best sample. Experiments demonstrate that NGLA outperforms other label assignment methods on CrowdHuman and CityPersons and is less sensitive to the NMS threshold in crowd scenes.
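The quantity at the center of the abstract, each sample's IoU with the "best sample" that greedy NMS keeps, can be sketched minimally. The following is an illustrative Python sketch, not the authors' implementation: the function names `iou`, `nms`, and `iou_with_best`, and the idea of thresholding the IoU-with-best value to mark positives, are assumptions for illustration only.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh):
    """Greedy NMS: keep the highest-scoring box, suppress boxes whose IoU
    with it exceeds thresh, and repeat. Returns kept indices in score order."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[i], boxes[best]) <= thresh]
    return keep

def iou_with_best(boxes, scores):
    """For each sample, IoU with the single highest-scoring sample
    (the 'best sample'). A hypothetical NGLA-style rule could then label
    samples with a high value here as positives, aligning the assignment
    with what NMS would actually retain."""
    best = max(range(len(boxes)), key=lambda i: scores[i])
    return [iou(boxes[i], boxes[best]) for i in range(len(boxes))]
```

For example, with two heavily overlapping boxes and one disjoint box, `nms` at threshold 0.5 suppresses the lower-scoring overlapping box, and `iou_with_best` assigns that suppressed box a large IoU-with-best value, which is exactly the signal the abstract says NGLA exploits when defining positives and weighting the loss.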
Pages: 2207-2218
Number of Pages: 12
Cited References
62 records in total
  • [1] Cao XP, 2022, AAAI CONF ARTIF INTE, P185
  • [2] Carion N., 2020, LNCS, V12346, P213, DOI 10.1007/978-3-030-58452-8_13
  • [3] Beyond triplet loss: a deep quadruplet network for person re-identification
    Chen, Weihua
    Chen, Xiaotang
    Zhang, Jianguo
    Huang, Kaiqi
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 1320 - 1329
  • [4] Chi C, 2020, AAAI CONF ARTIF INTE, V34, P10639
  • [5] Detection in Crowded Scenes: One Proposal, Multiple Predictions
    Chu, Xuangeng
    Zheng, Anlin
    Zhang, Xiangyu
    Sun, Jian
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2020), 2020, : 12211 - 12220
  • [6] Dai JF, 2016, ADV NEUR IN, V29
  • [7] Deformable Convolutional Networks
    Dai, Jifeng
    Qi, Haozhi
    Xiong, Yuwen
    Li, Yi
    Zhang, Guodong
    Hu, Han
    Wei, Yichen
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 764 - 773
  • [8] Histograms of oriented gradients for human detection
    Dalal, N
    Triggs, B
    [J]. 2005 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOL 1, PROCEEDINGS, 2005, : 886 - 893
  • [9] Deng J, 2009, PROC CVPR IEEE, P248, DOI 10.1109/CVPRW.2009.5206848
  • [10] Pedestrian Detection: An Evaluation of the State of the Art
    Dollar, Piotr
    Wojek, Christian
    Schiele, Bernt
    Perona, Pietro
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2012, 34 (04) : 743 - 761