FoveaBox: Beyound Anchor-Based Object Detection

Cited by: 664
Authors
Kong, Tao [1 ]
Sun, Fuchun [2 ]
Liu, Huaping [2 ]
Jiang, Yuning [1 ]
Li, Lei [1 ]
Shi, Jianbo [3 ]
Affiliations
[1] ByteDance AI Lab, Beijing 100098, Peoples R China
[2] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol (BNRist), Dept Comp Sci & Technol, Beijing 100084, Peoples R China
[3] Univ Penn, Grasp Lab, Philadelphia, PA 19104 USA
Funding
U.S. National Science Foundation (NSF);
Keywords
Object detection; anchor free; foveabox;
DOI
10.1109/TIP.2020.3002345
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Code
081104; 0812; 0835; 1405;
Abstract
We present FoveaBox, an accurate, flexible, and completely anchor-free framework for object detection. While almost all state-of-the-art object detectors utilize predefined anchors to enumerate possible locations, scales, and aspect ratios in the search for objects, their performance and generalization ability are also limited by the design of the anchors. Instead, FoveaBox directly learns the object existence probability and the bounding box coordinates without anchor reference. This is achieved by: (a) predicting category-sensitive semantic maps for the object existence probability, and (b) producing a category-agnostic bounding box for each position that potentially contains an object. The scales of target boxes are naturally associated with feature pyramid representations. In FoveaBox, an instance is assigned to adjacent feature levels to make the model more accurate. We demonstrate its effectiveness on standard benchmarks and report extensive experimental analysis. Without bells and whistles, FoveaBox achieves state-of-the-art single-model performance on the standard COCO and Pascal VOC object detection benchmarks. More importantly, FoveaBox avoids all computation and hyper-parameters related to anchor boxes, to which the final detection performance is often sensitive. We believe this simple and effective approach will serve as a solid baseline and help ease future research on object detection. The code has been made publicly available at https://github.com/taokong/FoveaBox.
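As a rough illustration of the two prediction branches described in the abstract, the following is a minimal sketch of an anchor-free detection head in PyTorch. It is not the authors' implementation (see the repository linked above); the branch depth (4 convs), channel width (256), FPN strides, and output shapes are illustrative assumptions, and target assignment (fovea areas), losses, and training are omitted.

# Minimal sketch of an anchor-free head in the spirit of FoveaBox.
# Assumptions: PyTorch, 256-channel FPN features, 4 convs per branch.
# Not the official implementation; see https://github.com/taokong/FoveaBox.
import torch
import torch.nn as nn

class AnchorFreeHead(nn.Module):
    """Per-FPN-level head: class-sensitive score maps + class-agnostic box offsets."""

    def __init__(self, num_classes: int, in_channels: int = 256, feat_channels: int = 256):
        super().__init__()

        def branch():
            layers, ch = [], in_channels
            for _ in range(4):
                layers += [nn.Conv2d(ch, feat_channels, 3, padding=1), nn.ReLU(inplace=True)]
                ch = feat_channels
            return nn.Sequential(*layers)

        self.cls_branch = branch()
        self.reg_branch = branch()
        # (a) category-sensitive semantic map: one score per class per spatial position
        self.cls_out = nn.Conv2d(feat_channels, num_classes, 3, padding=1)
        # (b) category-agnostic box: 4 boundary offsets per spatial position
        self.reg_out = nn.Conv2d(feat_channels, 4, 3, padding=1)

    def forward(self, feats):
        """feats: list of FPN feature maps (e.g. P3..P7); returns per-level predictions."""
        cls_scores, box_preds = [], []
        for x in feats:
            cls_scores.append(self.cls_out(self.cls_branch(x)))
            box_preds.append(self.reg_out(self.reg_branch(x)))
        return cls_scores, box_preds

# Usage example on dummy FPN features for a 512x512 input (strides 8..128):
if __name__ == "__main__":
    head = AnchorFreeHead(num_classes=80)
    feats = [torch.randn(1, 256, 512 // s, 512 // s) for s in (8, 16, 32, 64, 128)]
    cls_scores, box_preds = head(feats)
    print([t.shape for t in cls_scores])  # [1, 80, H, W] per pyramid level
    print([t.shape for t in box_preds])   # [1, 4, H, W] per pyramid level

At inference, each position whose class score exceeds a threshold would be decoded into a box from its 4 predicted offsets, with the object scale handled implicitly by which pyramid level the position belongs to.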
Pages: 7389-7398
Number of pages: 10