Learning Dense and Continuous Optical Flow From an Event Camera

Cited by: 29
Authors
Wan, Zhexiong [1]
Dai, Yuchao [1]
Mao, Yuxin [1]
Affiliations
[1] Northwestern Polytech Univ, Sch Elect & Informat, Xian 710129, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Event camera; event-based vision; optical flow estimation; multimodal learning;
DOI
10.1109/TIP.2022.3220938
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Event cameras such as DAVIS can simultaneously output high-temporal-resolution events and low-frame-rate intensity images, which have great potential for capturing scene motion, e.g., for optical flow estimation. Most existing optical flow estimation methods are based on two consecutive image frames and can only estimate discrete flow at a fixed time interval. Previous work has shown that continuous flow estimation can be achieved by varying the number or time interval of events. However, such methods struggle to estimate reliable dense flow, especially in regions without any triggered events. In this paper, we propose a novel deep-learning-based framework for dense and continuous optical flow estimation from a single image with event streams, which facilitates accurate perception of high-speed motion. Specifically, we first propose an event-image fusion and correlation module to effectively exploit the internal motion from the two different data modalities. We then propose an iterative update network structure with bidirectional training for optical flow prediction. As a result, our model can estimate reliable dense flow like two-frame-based methods, as well as temporally continuous flow like event-based methods. Extensive experiments on both synthetic and real captured datasets demonstrate that our model outperforms existing event-based state-of-the-art methods and our designed baselines for accurate dense and continuous optical flow estimation.
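The abstract mentions a "fusion and correlation" module followed by iterative flow updates. The paper's exact module is not given here; as a hedged illustration, the sketch below shows the standard all-pairs correlation volume (RAFT-style, normalized by the square root of the channel count) that such iterative-update flow networks typically build between two feature maps — here assumed to come from the image branch and the event branch. All names and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def correlation_volume(feat_a, feat_b):
    """All-pairs correlation between two feature maps.

    feat_a, feat_b: arrays of shape (C, H, W), e.g. image-branch and
    event-branch features (hypothetical naming). Returns an (H, W, H, W)
    volume whose entry [i, j, k, l] is the dot product of feat_a[:, i, j]
    and feat_b[:, k, l], scaled by 1/sqrt(C).
    """
    c, h, w = feat_a.shape
    a = feat_a.reshape(c, h * w)      # flatten spatial dims: (C, HW)
    b = feat_b.reshape(c, h * w)      # (C, HW)
    corr = a.T @ b / np.sqrt(c)       # (HW, HW) all-pairs dot products
    return corr.reshape(h, w, h, w)

# Toy example: with identical inputs, the "diagonal" entries are largest.
rng = np.random.default_rng(0)
f = rng.standard_normal((8, 4, 4)).astype(np.float32)
vol = correlation_volume(f, f)
print(vol.shape)  # (4, 4, 4, 4)
```

An iterative update network would then repeatedly index this volume around the current flow estimate and feed the lookups to a recurrent refinement unit; the volume itself is computed only once.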
Pages: 7237-7251
Page count: 15