Dark Photo Reconstruction by Event Camera

Cited by: 1
Authors
Zhe, Jiang [1]
Affiliations
[1] Sichuan Univ, Chengdu, Peoples R China
Source
2019 INTERNATIONAL CONFERENCE ON VIRTUAL REALITY AND VISUALIZATION (ICVRV) | 2019
Keywords
Event Camera; Neural Networks; Dark image reconstruction;
DOI
10.1109/ICVRV47840.2019.00027
Chinese Library Classification
TP39 [Computer Applications];
Subject Classification Code
081203 ; 0835 ;
Abstract
Imaging in a low-light environment is a challenging problem due to the limited number of photons gathered by a traditional camera. To tackle this problem, this paper introduces a novel deep learning method for intensity image reconstruction in low light with event cameras. Event cameras are biologically inspired sensors that capture brightness changes in the form of asynchronous "events" instead of intensity frames. They have significant advantages over conventional cameras: high temporal resolution, high dynamic range, and no motion blur. This method exploits the high-dynamic-range characteristic of event cameras and bridges the gap between the intensity image and the event data stream. The main challenges in our approach are that it is very hard to build paired low/high-exposure event/intensity data for training, and that event data captured in the dark are very noisy and sparse. Even if we have paired event-intensity data captured in the daytime, models trained on it cannot generalize well to low-light conditions. In this paper, we combine paired and unpaired data and propose a novel GAN-based hybrid learning framework to overcome this difficulty and improve the quality of the reconstructed images. Experimental results on both synthetic and real data demonstrate the superiority of our method in comparison to the state of the art.
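The abstract does not specify how the asynchronous event stream is converted into a tensor a neural network can consume. A common choice in event-based reconstruction work is a spatio-temporal voxel grid that bins events over time and accumulates signed polarities per pixel; the sketch below illustrates that representation only. The function name `events_to_voxel_grid` and the `(t, x, y, polarity)` column layout are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate an asynchronous event stream into a voxel grid of
    shape (num_bins, height, width) — a common CNN input
    representation, assumed here rather than taken from the paper.

    events: array of shape (N, 4) with columns (t, x, y, polarity),
            polarity in {-1, +1}.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return grid
    t = events[:, 0]
    # Normalize timestamps into [0, num_bins - 1] and pick a bin index.
    t_norm = (num_bins - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)
    bins = t_norm.astype(int)
    xs = events[:, 1].astype(int)
    ys = events[:, 2].astype(int)
    pol = events[:, 3]
    # Unbuffered accumulation: repeated (bin, y, x) indices sum correctly.
    np.add.at(grid, (bins, ys, xs), pol)
    return grid

# Example: four events on a 4x4 sensor, split into two temporal bins.
events = np.array([
    [0.00, 1, 1, +1.0],
    [0.01, 2, 1, +1.0],
    [0.09, 1, 2, -1.0],
    [0.10, 3, 3, +1.0],
])
grid = events_to_voxel_grid(events, num_bins=2, height=4, width=4)
```

Each slice of the resulting grid behaves like a short-exposure "frame" of signed brightness changes, which is what makes standard image-to-image architectures (such as the GAN-based framework described above) applicable to event data.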
Pages: 113 - 117
Number of pages: 5