Depth from a Motion Algorithm and a Hardware Architecture for Smart Cameras

Cited by: 8
Authors
Aguilar-Gonzalez, Abiel [1,2]
Arias-Estrada, Miguel [1]
Berry, Francois [2]
Affiliations
[1] INAOE, Tonantzintla 72840, Mexico
[2] UCA, Inst Pascal, F-63178 Clermont-Ferrand, France
Keywords
depth estimation; monocular systems; optical flow; smart cameras; FPGA (Field-Programmable Gate Array); robotics; vision
DOI
10.3390/s19010053
CLC Number
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Applications such as autonomous navigation, robot vision, and autonomous flying require depth map information of a scene. Depth can be estimated with a single moving camera (depth from motion). However, traditional depth-from-motion algorithms have low processing speeds and high hardware requirements that limit their use in embedded systems. In this work, we propose a hardware architecture for depth from motion that consists of a flow/depth transformation and a new optical flow algorithm. Our optical flow formulation is an extension of the stereo matching problem: a pixel-parallel/window-parallel approach in which a correlation function based on the sum of absolute differences (SAD) computes the optical flow. Further, to improve the SAD matching, we propose the curl of the intensity gradient as a preprocessing step. Experimental results show that our approach reaches higher accuracy (90%) than previous Field-Programmable Gate Array (FPGA)-based optical flow algorithms. For depth estimation, our algorithm delivers dense maps with motion and depth information for all image pixels, at a processing speed up to 128 times faster than previous work, enabling high performance in embedded applications.
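The abstract describes the optical flow step as stereo-style SAD window matching extended to a 2-D search, with the curl of the intensity gradient as a preprocessing step. The following NumPy sketch illustrates that idea only; the window size, search range, and function names are assumptions rather than the authors' FPGA design, and the exhaustive Python loops stand in for the pixel-parallel/window-parallel hardware.

```python
import numpy as np

def curl_preprocess(img):
    """Curl of the intensity gradient (assumed preprocessing step).
    np.gradient returns (d/dy, d/dx) for a 2-D array."""
    gy, gx = np.gradient(img.astype(np.float32))
    # Scalar curl of the gradient field: d(gy)/dx - d(gx)/dy.
    return np.gradient(gy, axis=1) - np.gradient(gx, axis=0)

def sad_flow(prev, curr, win=3, search=4):
    """Dense optical flow by exhaustive SAD block matching: the stereo
    matching problem extended from a 1-D disparity search to a 2-D
    (2*search+1)^2 displacement search around every pixel."""
    h, w = prev.shape
    flow = np.zeros((h, w, 2), np.float32)
    size = 2 * win + 1
    p = np.pad(prev, win, mode="edge")
    c = np.pad(curr, win + search, mode="edge")
    for y in range(h):
        for x in range(w):
            ref = p[y:y + size, x:x + size]           # reference window
            best, best_uv = np.inf, (0.0, 0.0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cy, cx = y + search + dy, x + search + dx
                    cost = np.abs(ref - c[cy:cy + size, cx:cx + size]).sum()
                    if cost < best:                   # keep lowest SAD
                        best, best_uv = cost, (dx, dy)
            flow[y, x] = best_uv                      # (u, v) in pixels
    return flow
```

A typical call would be `flow = sad_flow(curl_preprocess(f0), curl_preprocess(f1))` on consecutive grayscale frames; a hardware implementation would replace the nested loops with parallel window comparisons.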
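The flow/depth transformation itself is not detailed in the abstract. Under the common assumption of a camera translating parallel to the image plane, depth is inversely proportional to flow magnitude, Z ≈ f·|t|/|u|. The sketch below uses that textbook pinhole relation; `f_px` (focal length in pixels) and `t_m` (inter-frame translation in metres) are illustrative values, not parameters from the paper.

```python
def flow_to_depth(flow, f_px=525.0, t_m=0.10):
    """Pinhole-model flow-to-depth for a laterally translating camera:
    Z ~= f * |t| / |u|. f_px and t_m are illustrative assumptions."""
    mag = np.linalg.norm(flow, axis=2)       # per-pixel flow magnitude
    # Zero-flow pixels correspond to points at effectively infinite depth.
    return np.where(mag > 0, f_px * t_m / np.maximum(mag, 1e-6), np.inf)
```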
Pages: 20