Reasonable object detection guided by knowledge of global context and category relationship

Cited by: 5
Authors
Ji, Haoqin
Ye, Kai
Wan, Qi
Shen, Linlin [1 ]
Affiliations
[1] Shenzhen Univ, Sch Comp Sci & Software Engn, Comp Vis Inst, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Object detection; Prior knowledge; Graph Convolutional Network;
DOI
10.1016/j.eswa.2022.118285
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Mainstream object detectors usually treat each region separately, overlooking important global context information and the associations between object categories. Existing methods model global context via attention mechanisms, which require ad hoc design and prior knowledge. Other works combine CNN features with label dependencies learned from a pre-defined graph and word embeddings, but they ignore the gap between visual features and textual corpora and are usually task-specific (i.e., they depend on RoIPool/RoIAlign). To move beyond these specific settings and enable different types of detectors to refine their results with the help of prior knowledge, we propose KROD (Knowledge-guided Reasonable Object Detection), which consists of a GKM (Global Category Knowledge Mining) module and a CRM (Category Relationship Knowledge Mining) module and improves detection performance by mimicking the process of human reasoning. For a given image, GKM introduces global category knowledge into the detector by simply attaching a multi-label image classification branch to the backbone. Meanwhile, CRM feeds the raw detection outputs into a knowledge graph built from object category co-occurrence statistics to further refine the original results, with the help of a GCN (Graph Convolutional Network). We also propose a novel loss-aware module that selectively corrects the classification probabilities of different detected boxes. Without bells and whistles, extensive experiments show that KROD improves different baseline models (both anchor-based and anchor-free) by a large margin (1.2% to 1.8% higher AP) with a marginal loss of efficiency on MS COCO.
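To make the CRM idea concrete, the following is a minimal sketch, not the authors' code: it applies one symmetrically normalized GCN propagation step (in the style of Kipf and Welling) over a category co-occurrence adjacency matrix to refine per-box class scores. All names, the blending factor, and the random weights are illustrative assumptions.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, standard for GCNs.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_refine(scores, A, W):
    # scores: (num_boxes, num_classes) raw detector class scores
    # A:      (num_classes, num_classes) category co-occurrence adjacency
    # W:      (num_classes, num_classes) learnable weight (random here)
    A_norm = normalize_adj(A)
    # Propagate category-relationship information into each box's
    # score vector, then blend it with the original scores.
    refined = scores @ A_norm @ W
    return scores + 0.1 * np.tanh(refined)

rng = np.random.default_rng(0)
num_classes, num_boxes = 5, 3
A = (rng.random((num_classes, num_classes)) > 0.5).astype(float)
A = np.triu(A, 1)
A = A + A.T                      # symmetric, zero diagonal
scores = rng.random((num_boxes, num_classes))
W = rng.normal(scale=0.1, size=(num_classes, num_classes))
out = gcn_refine(scores, A, W)
print(out.shape)  # (3, 5)
```

In the paper's setting the adjacency would come from label co-occurrence statistics on the training set and W would be learned end-to-end; the residual-style blend reflects that CRM refines, rather than replaces, the detector's raw outputs.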
Pages: 11