Multi-exposure image fusion using structural weights and visual saliency map

Cited by: 0
Authors
Tirumala Vasu G. [1 ]
Palanisamy P. [1 ]
Affiliations
[1] Communications Engineering, National Institute of Technology, Tiruchirapalli, Tamil Nadu
[2] Presidency University, Bangalore
Keywords
Fusion quality metrics; Rolling guided filter; Structural patch decomposition; Visual saliency map;
DOI
10.1007/s11042-024-19355-w
Chinese Library Classification
TP3 [Computing technology; computer technology]
Discipline code
0812
Abstract
Multi-Exposure Image Fusion (MEIF) combines multiple images captured at different exposure levels into a single image with good visual perception. Traditional techniques often suffer from spatial inconsistency, visual distortion, noisy weight maps, and loss of vivid colour information. To address these issues, this article proposes a MEIF method using structural weights and a visual saliency map. Source images are decomposed into contrast, structure, and intensity features to obtain their detail layers. To preserve edge information and produce spatially consistent structures, the base layers of the source images are generated with a Rolling Guided Filter (RGF). Saliency maps of the source images are used to retain vivid colours and avoid visual distortion. A weight map generator compares the base layers and saliency maps in order to avoid noisy weight maps. Finally, the fused image is generated from the fused base and detail layers. The effectiveness of the proposed MEIF method has been evaluated both objectively and subjectively, and the results show that it is superior to a subset of already available solutions. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
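The pipeline described in the abstract can be sketched as follows. This is a minimal illustrative implementation under stated assumptions, not the authors' code: a simple box blur stands in for the rolling guided filter's smoothing step, local contrast magnitude approximates the visual saliency map, a well-exposedness term plays the role of the structural weights, and all parameter values (kernel size, exposedness width) are assumptions.

```python
import numpy as np

def box_blur(img, k=5):
    """Box blur via summed shifted windows (edge-padded); stand-in for RGF smoothing."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    h, w = img.shape
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def fuse(exposures, k=5):
    """Fuse grayscale exposures in [0, 1]: base/detail split + saliency-driven weights."""
    bases = [box_blur(x, k) for x in exposures]           # base layers (RGF stand-in)
    details = [x - b for x, b in zip(exposures, bases)]   # detail layers
    weights = []
    for x, b in zip(exposures, bases):
        saliency = np.abs(x - b) + 1e-6                   # local-contrast saliency proxy
        exposedness = np.exp(-((x - 0.5) ** 2) / (2 * 0.2 ** 2))  # well-exposedness
        weights.append(saliency * exposedness)
    wsum = np.sum(weights, axis=0)
    weights = [w / wsum for w in weights]                 # normalise per pixel
    fused_base = sum(w * b for w, b in zip(weights, bases))
    fused_detail = sum(w * d for w, d in zip(weights, details))
    return np.clip(fused_base + fused_detail, 0.0, 1.0)
```

In practice the box blur would be replaced by a true rolling guided filter (e.g. the iterative joint-bilateral formulation) and the saliency proxy by a dedicated visual saliency model, but the base/detail split, per-image weight maps, and weighted recombination follow the structure the abstract describes.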
Pages: 9865–9880
Page count: 15