Reasonable object detection guided by knowledge of global context and category relationship

Cited by: 5
Authors
Ji, Haoqin
Ye, Kai
Wan, Qi
Shen, Linlin [1 ]
Affiliation
[1] Shenzhen Univ, Sch Comp Sci & Software Engn, Comp Vis Inst, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Object detection; Prior knowledge; Graph Convolutional Network;
DOI
10.1016/j.eswa.2022.118285
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Mainstream object detectors usually treat each region separately, overlooking important global context information and the associations between object categories. Existing methods model global context via attention mechanisms, which require ad hoc design and prior knowledge. Some works combine CNN features with label dependencies learned from a pre-defined graph and word embeddings, but these ignore the gap between visual features and textual corpora and are usually task-specific (they depend on RoIPool/RoIAlign). To move beyond such specific settings and enable different types of detectors to refine their results with the help of prior knowledge, we propose KROD (Knowledge-guided Reasonable Object Detection), which consists of a GKM (Global Category Knowledge Mining) module and a CRM (Category Relationship Knowledge Mining) module and improves detection performance by mimicking the processes of human reasoning. For a given image, GKM introduces global category knowledge into the detector by simply attaching a multi-label image classification branch to the backbone. Meanwhile, CRM feeds the raw detection outputs into a knowledge graph built from object-category co-occurrence statistics to further refine the original results, with the help of a GCN (Graph Convolutional Network). We also propose a novel loss-aware module that distinctively corrects the classification probabilities of different detected boxes. Without bells and whistles, extensive experiments show that the proposed KROD improves different baseline models (both anchor-based and anchor-free) by a large margin (1.2%-1.8% higher AP) on MS COCO, with only a marginal loss of efficiency.
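The CRM idea described above, propagating per-category detection evidence over a co-occurrence graph with a GCN, can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the paper's implementation: the toy co-occurrence counts, the two-layer network, and the names `crm_refine` and `alpha` are all assumptions for illustration; in KROD the graph statistics come from the training set and the weights are learned end-to-end.

```python
import numpy as np

# Hypothetical co-occurrence counts among 4 categories (e.g. person, dog,
# leash, car); in practice these statistics are gathered from training data.
cooc = np.array([[0., 8., 5., 6.],
                 [8., 0., 7., 1.],
                 [5., 7., 0., 0.],
                 [6., 1., 0., 0.]])

def normalize_adjacency(A):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def crm_refine(box_scores, A_norm, W1, W2, alpha=0.1):
    """Aggregate per-category evidence from the raw detections, propagate it
    over the co-occurrence graph with two GCN layers, then add the resulting
    context back to every box's classification scores."""
    x = box_scores.max(axis=0, keepdims=True).T   # (C, 1) per-category evidence
    h = np.maximum(A_norm @ x @ W1, 0.0)          # GCN layer 1 + ReLU: (C, hidden)
    context = A_norm @ h @ W2                     # GCN layer 2: (C, 1)
    return box_scores + alpha * context.T         # broadcast over all boxes

rng = np.random.default_rng(0)
A_norm = normalize_adjacency(cooc)
W1 = rng.standard_normal((1, 8)) * 0.1            # toy weights; learned in practice
W2 = rng.standard_normal((8, 1)) * 0.1
raw = rng.random((5, 4))                          # 5 detected boxes, 4 class scores
refined = crm_refine(raw, A_norm, W1, W2)
```

Categories that frequently co-occur with well-detected ones receive a boost, while implausible combinations are suppressed once the weights are trained, which is the intuition behind refining raw detections with category-relationship knowledge.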
Pages: 11