Velocity estimation from monocular video for automotive applications using convolutional neural networks

Cited by: 0
Authors
Banerjee, Koyel [1]
Tuan Van Dinh [1]
Levkova, Ludmila [2]
Affiliations
[1] BMW Grp Technol Off, 2606 Bayshore Pkwy, Mountain View, CA 94043 USA
[2] Nauto Inc, 380 Portage Ave, Palo Alto, CA 94306 USA
Source
2017 28TH IEEE INTELLIGENT VEHICLES SYMPOSIUM (IV 2017) | 2017
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We aim to determine the speed of ego-vehicle motion from a video stream. Previous work by Konda et al. [1] has shown that motion can be detected and quantified with the help of a synchrony autoencoder, which introduces multiplicative gating interactions between its hidden units and, hence, across video frames. In this work we modify their synchrony autoencoder method to achieve real-time performance in a wide variety of driving environments. Our modifications yield a model that is 1.5 times faster and uses only half the memory of the original. We also benchmark the speed estimation performance against a model based on CaffeNet. CaffeNet is known for visual classification and localization, but we employ its architecture with a small modification for speed estimation using sequential video frames and blur patterns. We evaluate our models on self-collected data, KITTI, and other standard datasets.
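The multiplicative gating the abstract refers to can be illustrated with a minimal sketch: hidden units respond to the element-wise product of filter responses from two consecutive frames, so they fire when the frames are "in sync" under a given filter (the core idea of Konda et al.'s synchrony autoencoder). This is an illustrative reconstruction, not the authors' code; the function name `synchrony_hidden`, the shared filter bank `W`, and the scale `sigma` are all assumptions.

```python
import numpy as np

def synchrony_hidden(frame_t, frame_tp1, W, sigma=1.0):
    """Hidden activations from multiplicative gating of two frames.

    Both frames (flattened patches) are projected onto the SAME
    filter bank W; the hidden unit sees the product of the two
    responses, so it detects synchrony (coherent motion) rather
    than static appearance.
    """
    f_t = W @ frame_t        # filter responses at time t
    f_tp1 = W @ frame_tp1    # filter responses at time t+1
    gated = f_t * f_tp1      # multiplicative gating across frames
    return 1.0 / (1.0 + np.exp(-gated / sigma))  # squash to (0, 1)

# Toy example: a 64-dim patch and a slightly shifted "next frame".
rng = np.random.default_rng(0)
D, H = 64, 16
W = 0.1 * rng.standard_normal((H, D))
x0 = rng.standard_normal(D)
x1 = x0 + 0.01 * rng.standard_normal(D)
h = synchrony_hidden(x0, x1, W)   # shape (16,), each value in (0, 1)
```

In the full model, sequences of such gated activations over many frame pairs would feed a regressor that maps synchrony strength to ego-vehicle speed; that stage is not shown here.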
Pages: 373 / 378
Page count: 6
References: 29