Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes

Cited by: 281
Authors
Abu Alhaija, Hassan [1 ]
Mustikovela, Siva Karthik [1 ]
Mescheder, Lars [2 ]
Geiger, Andreas [2 ,3 ]
Rother, Carsten [1 ]
Affiliations
[1] Heidelberg Univ, Visual Learning Lab, Heidelberg, Germany
[2] MPI IS Tübingen, Autonomous Vis Grp, Tübingen, Germany
[3] Swiss Fed Inst Technol, Comp Vis & Geometry Grp, Zurich, Switzerland
Funding
EU Horizon 2020; European Research Council
Keywords
Synthetic training data; Data augmentation; Autonomous driving; Instance segmentation; Object detection;
DOI
10.1007/s11263-018-1070-x
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The success of deep learning in computer vision is based on the availability of large annotated datasets. To reduce the need for hand-labeled images, virtually rendered 3D worlds have recently gained popularity. Unfortunately, creating realistic 3D content is challenging on its own and requires significant human effort. In this work, we propose an alternative paradigm which combines real and synthetic data for learning semantic instance segmentation and object detection models. Exploiting the fact that not all aspects of the scene are equally important for this task, we propose to augment real-world imagery with virtual objects of the target category. Capturing real-world images at large scale is easy and cheap, and directly provides real background appearances without the need for creating complex 3D models of the environment. We present an efficient procedure to augment these images with virtual objects. In contrast to modeling complete 3D environments, our data augmentation approach requires only a few user interactions in combination with 3D models of the target object category. Leveraging our approach, we introduce a novel dataset of augmented urban driving scenes with 360-degree images that are used as environment maps to create realistic lighting and reflections on rendered objects. We analyze the significance of realistic object placement by comparing manual placement by humans to automatic methods based on semantic scene analysis. This allows us to create composite images which exhibit both a realistic background appearance and a large number of complex object arrangements. Through an extensive set of experiments, we determine the set of parameters that produces augmented data which maximally enhances the performance of instance segmentation models. Further, we demonstrate the utility of the proposed approach for training standard deep models for semantic instance segmentation and object detection of cars in outdoor driving scenarios. We test the models trained on our augmented data on the KITTI 2015 dataset, which we have annotated with pixel-accurate ground truth, and on the Cityscapes dataset. Our experiments demonstrate that models trained on augmented imagery generalize better than those trained on fully synthetic data or on limited amounts of annotated real data.
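The augmentation step described in the abstract amounts to compositing a pre-rendered, environment-map-lit object crop onto a real photograph at a plausible location, with the rendering's alpha channel also providing a pixel-accurate instance label for free. The sketch below (Python with NumPy and Pillow) is only an illustrative reconstruction under that assumption, not the authors' pipeline; the function name, file-path arguments, and the explicit placement argument are hypothetical.

    import numpy as np
    from PIL import Image

    def composite_object(background_path, render_path, alpha_path, position):
        """Alpha-blend a pre-rendered object crop onto a real background image.

        background_path: real photograph (RGB)
        render_path:     rendered object crop (RGB), e.g. lit with an environment map
        alpha_path:      the rendering's alpha mask (grayscale, 0-255)
        position:        (x, y) top-left pixel of the paste region; the caller must
                         ensure the crop fits inside the background
        """
        bg = np.asarray(Image.open(background_path).convert("RGB"), dtype=np.float32)
        fg = np.asarray(Image.open(render_path).convert("RGB"), dtype=np.float32)
        alpha = np.asarray(Image.open(alpha_path).convert("L"), dtype=np.float32) / 255.0

        x, y = position
        h, w = fg.shape[:2]
        region = bg[y:y + h, x:x + w]

        # Standard "over" compositing: out = alpha * foreground + (1 - alpha) * background
        blended = alpha[..., None] * fg + (1.0 - alpha[..., None]) * region
        bg[y:y + h, x:x + w] = blended

        # The alpha mask doubles as ground-truth instance annotation for the pasted object.
        instance_mask = np.zeros(bg.shape[:2], dtype=np.uint8)
        instance_mask[y:y + h, x:x + w] = (alpha > 0.5).astype(np.uint8)

        return Image.fromarray(bg.astype(np.uint8)), instance_mask

In the paper, the placement is not supplied manually as in this sketch but derived automatically from semantic scene analysis (or, for comparison, chosen by human annotators), which is what makes the resulting object arrangements realistic.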
Pages: 961-972
Page count: 12