Hybrid All-in-Focus Imaging From Neuromorphic Focal Stack

Times Cited: 0
Authors
Teng, Minggui [1 ,2 ]
Lou, Hanyue [1 ,2 ]
Yang, Yixin [1 ,2 ]
Huang, Tiejun [1 ,2 ]
Shi, Boxin [1 ,2 ]
Affiliations
[1] Peking Univ, State Key Lab Multimedia Informat Proc, Beijing 100871, Peoples R China
[2] Peking Univ, Sch Comp Sci, Natl Engn Res Ctr Visual Technol, Beijing 100871, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation
Keywords
Neuromorphics; Cameras; Image reconstruction; Imaging; Streams; Photography; Merging; All-in-focus imaging; hybrid camera system; neuromorphic camera; FIELD; DEPTH; CAMERAS;
DOI
10.1109/TPAMI.2024.3433607
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Creating an image focal stack requires multiple shots that capture images focused at different depths within the same scene, which makes such methods unsuitable for scenes undergoing continuous change. Recovering an all-in-focus image from a single shot is also challenging, because removing defocus blur from a single image is highly ill-posed. In this paper, we introduce the neuromorphic focal stack, defined as the neuromorphic signal streams captured by an event camera or a spike camera during a continuous focal sweep, to restore an all-in-focus image. Given an RGB image focused at an arbitrary distance, we exploit the high temporal resolution of the neuromorphic streams to automatically select refocusing timestamps and reconstruct the corresponding refocused images, forming a focal stack. Guided by the neuromorphic signal around the selected timestamps, we merge the focal stack with appropriate weights to restore a sharp all-in-focus image. We evaluate our method on two distinct neuromorphic cameras; experimental results on both synthetic and real datasets demonstrate a marked improvement over existing state-of-the-art methods.
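To make the merging step concrete, below is a minimal Python sketch of generic focus-measure-based focal-stack fusion. The function name merge_focal_stack and the choice of smoothed Laplacian energy as the per-pixel focus weight are illustrative assumptions; the paper instead derives its merging weights from the neuromorphic signal around each selected timestamp, which this sketch does not reproduce.

    # Minimal sketch: merge a focal stack with per-pixel sharpness weights.
    # NOTE: a generic stand-in, not the paper's neuromorphic-guided weighting.
    import numpy as np
    from scipy.ndimage import gaussian_filter, laplace

    def merge_focal_stack(stack: np.ndarray, sigma: float = 2.0) -> np.ndarray:
        """Merge a focal stack of shape (N, H, W) of grayscale refocused images.

        Each output pixel is a weighted average over the N images, where the
        weight is the locally smoothed squared Laplacian response -- a simple
        measure of how in-focus each image is at that pixel.
        """
        stack = stack.astype(np.float64)
        # Per-image focus measure, smoothed so weights vary gently in space.
        focus = np.stack([gaussian_filter(laplace(img) ** 2, sigma) for img in stack])
        # Normalize across the stack dimension; epsilon avoids division by zero
        # in textureless regions where no image shows measurable sharpness.
        weights = focus / (focus.sum(axis=0, keepdims=True) + 1e-12)
        return (weights * stack).sum(axis=0)

    # Usage: stack = np.stack([img_near, img_mid, img_far]); aif = merge_focal_stack(stack)

The softmax-like normalization means regions that are sharp in only one refocused frame draw almost entirely from that frame, while ambiguous regions blend smoothly across the stack.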
Pages: 10124-10137
Number of Pages: 14