Multispectral non-line-of-sight imaging via deep fusion photography

Cited by: 0
Authors
Hao Liu [1]
Zhen Xu [2]
Yifan Wei [2]
Kai Han [1]
Xin Peng [2]
Affiliations
[1] College of Advanced Interdisciplinary Studies, National University of Defense Technology, Changsha
[2] School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing
Funding
National Natural Science Foundation of China
Keywords
Kolmogorov-Arnold network; learning-based imaging method; multispectral deep fusion; non-line-of-sight imaging; VIS-SWIR-LWIR multispectral imaging;
DOI
10.1007/s11432-024-4256-3
Abstract
Passive non-line-of-sight (NLOS) imaging is a promising technique that extends visual perception to hidden objects around a corner, offering advantages such as low cost, portability, and real-time operation. However, the low quality of current passive NLOS images remains a significant barrier to field application of NLOS target imaging at long standoff distances. This study introduces a multispectral NLOS imaging approach that uses a deep fusion framework to reconstruct images from visible, short-wavelength infrared, and long-wavelength infrared raw data captured by portable devices. The nonlinear representation capability and learnable activation functions of the Kolmogorov-Arnold network (KAN) are particularly suited to the inverse light field transmission model in NLOS imaging, enhancing the interpretability of the deep neural network. Experimental results demonstrate that this deep fusion photography method delivers satisfactory performance in imaging occluded individuals despite the polynomial attenuation of effective signals with increasing distance between hidden objects and the relay wall. Notably, the passive NLOS experiments show successful imaging of hidden people at distances greater than 5 m from the relay wall. Even at distances three times greater than those in previous studies, quantitative metrics validate the superior performance of the proposed method on the passive NLOS imaging task. © Science China Press 2025.
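To make the architecture described in the abstract concrete, the following is a minimal sketch of a KAN-style fusion head operating on per-pixel VIS, SWIR, and LWIR measurements. It is not the authors' released implementation; the class names (KANLayer, MultispectralFusionKAN), the Gaussian radial-basis parameterization of the learnable edge activations (a common simplification of the spline parameterization), and all layer sizes are assumptions made for illustration only.

```python
# Hypothetical sketch of a KAN-based multispectral fusion head (not the paper's code).
import torch
import torch.nn as nn

class KANLayer(nn.Module):
    """One KAN-style layer: a learnable univariate function on every edge,
    parameterized here by Gaussian radial basis functions plus a residual
    linear path (an assumed simplification of the spline form)."""
    def __init__(self, in_features, out_features, num_basis=8, x_min=-2.0, x_max=2.0):
        super().__init__()
        self.register_buffer("centers", torch.linspace(x_min, x_max, num_basis))
        self.inv_width = num_basis / (x_max - x_min)
        # One set of basis coefficients per (input, output) edge.
        self.coeff = nn.Parameter(0.1 * torch.randn(out_features, in_features, num_basis))
        self.base = nn.Linear(in_features, out_features)

    def forward(self, x):
        # x: (batch, in_features)
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) * self.inv_width) ** 2)  # (B, in, K)
        spline_out = torch.einsum("bik,oik->bo", phi, self.coeff)
        return self.base(torch.nn.functional.silu(x)) + spline_out

class MultispectralFusionKAN(nn.Module):
    """Fuses per-pixel VIS, SWIR, and LWIR intensities with stacked KAN layers;
    the learnable edge activations stand in for the nonlinear inverse
    light-field transmission mapping referred to in the abstract."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            KANLayer(3, hidden),    # three spectral bands in
            KANLayer(hidden, hidden),
            KANLayer(hidden, 1),    # reconstructed intensity out
        )

    def forward(self, vis, swir, lwir):
        x = torch.stack([vis, swir, lwir], dim=-1)  # (..., 3)
        shape = x.shape[:-1]
        out = self.net(x.reshape(-1, 3))
        return out.reshape(*shape)

if __name__ == "__main__":
    model = MultispectralFusionKAN()
    vis, swir, lwir = (torch.rand(4, 64, 64) for _ in range(3))
    print(model(vis, swir, lwir).shape)  # torch.Size([4, 64, 64])
```

In the actual method the fusion would operate on learned feature maps rather than raw per-pixel intensities; this sketch only illustrates how learnable edge activations replace fixed node activations in a KAN.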