3D visual perception for self-driving cars using a multi-camera system: Calibration, mapping, localization, and obstacle detection

Cited by: 155
Authors
Hane, Christian [1 ]
Heng, Lionel [3 ]
Lee, Gim Hee [4 ]
Fraundorfer, Friedrich [5 ]
Furgale, Paul [6 ]
Sattler, Torsten [2 ]
Pollefeys, Marc [2 ,7 ]
Affiliations
[1] Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720 USA
[2] Swiss Fed Inst Technol, Dept Comp Sci, Univ Str 6, CH-8092 Zurich, Switzerland
[3] DSO Natl Labs, Informat Div, 12 Sci Pk Dr, Singapore 118225, Singapore
[4] Natl Univ Singapore, Dept Comp Sci, 13 Comp Dr, Singapore 117417, Singapore
[5] Graz Univ Technol, Inst Comp Graph & Vis, Inffeldgasse 16, A-8010 Graz, Austria
[6] Swiss Fed Inst Technol, Dept Mech & Proc Engn, Leonhardstr 21, CH-8092 Zurich, Switzerland
[7] Microsoft, One Microsoft Way, Redmond, WA 98052 USA
Keywords
Fisheye camera; Multi-camera system; Calibration; Mapping; Localization; Obstacle detection
DOI
10.1016/j.imavis.2017.07.003
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Cameras are a crucial exteroceptive sensor for self-driving cars as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field-of-view around the car. In this way, we avoid blind spots which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treat each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project. This project seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, as well as detect obstacles based on real-time depth map extraction. (C) 2017 Published by Elsevier B.V.
Pages: 14-27
Page count: 14