Infrared and visible light image fusion based on internal generative mechanism and convolution sparse representation

Cited by: 0
Authors
Feng X. [1, 2]
Affiliations
[1] College of Mechanical Engineering, Chongqing Technology and Business University, Chongqing
[2] Key Laboratory of Manufacturing Equipment Mechanism Design and Control of Chongqing, Chongqing Technology and Business University, Chongqing
Source
Kongzhi yu Juece/Control and Decision | 2021 / Vol. 37 / No. 01
Keywords
Convolution sparse representation; Image fusion; Infrared and visible light; Internal generative mechanism; ISR operator;
DOI
10.13195/j.kzyjc.2020.1080
Abstract
To improve the visual quality of infrared and visible light image fusion and to suppress artifacts in the fusion result, an image fusion method based on the internal generative mechanism and convolutional sparse representation is proposed. First, each source image is decomposed with the internal generative mechanism, which mimics the inference process of the human brain, to obtain a prediction layer and a detail layer. The detail layer is then further decomposed by convolutional sparse representation into a secondary detail layer and a base layer, which are fused with a choose-max activity-level measurement rule and a weighted-average rule, respectively. For the prediction layer, an ISR hybrid-operator fusion rule is defined. Finally, the fused prediction layer and the fused detail layer are summed to obtain the final fusion result. Three representative infrared and visible light image pairs are used to test the algorithm. The experimental results show that the proposed method achieves good subjective visual quality and also performs well on the objective evaluation metrics. © 2022, Editorial Office of Control and Decision. All rights reserved.
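The pipeline described in the abstract can be sketched in a few lines of NumPy. This is only a structural illustration, not the paper's method: a box blur stands in for the internal-generative-mechanism prediction step, a second low-pass split stands in for the convolutional sparse representation decomposition, and a plain average replaces the paper's ISR hybrid-operator rule for the prediction layer. Only the layer structure and the choose-max / weighted-average fusion rules follow the abstract.

```python
import numpy as np

def box_blur(img, k=5):
    # Separable box filter: a crude stand-in for the coarse "prediction"
    # produced by the internal generative mechanism (not the paper's model).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, out)
    return out

def fuse(ir, vis):
    # 1) Decompose each source into a prediction layer + detail layer.
    pred_ir, pred_vis = box_blur(ir), box_blur(vis)
    det_ir, det_vis = ir - pred_ir, vis - pred_vis
    # 2) Placeholder for the CSR split of the detail layer:
    #    base = low-pass of the detail, secondary detail = residual.
    base_ir, base_vis = box_blur(det_ir), box_blur(det_vis)
    sec_ir, sec_vis = det_ir - base_ir, det_vis - base_vis
    # 3) Choose-max rule on the secondary detail layer,
    #    with |coefficient| as the activity-level measure.
    fused_sec = np.where(np.abs(sec_ir) >= np.abs(sec_vis), sec_ir, sec_vis)
    # 4) Weighted-average rule on the base layer (equal weights here).
    fused_base = 0.5 * (base_ir + base_vis)
    # 5) Prediction layer: plain average as a placeholder for the
    #    ISR hybrid-operator fusion rule defined in the paper.
    fused_pred = 0.5 * (pred_ir + pred_vis)
    # 6) Sum the fused layers to form the final result.
    return fused_pred + fused_base + fused_sec
```

Because every fusion rule above reduces to the identity when both inputs agree, `fuse(x, x)` returns `x` (up to floating-point error), which is a quick sanity check for any layer-decomposition fusion scheme.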
Pages: 167-174
Page count: 7