Fusion of infrared and visible images via multi-layer convolutional sparse representation

Cited by: 3
Authors
Zhang, Zhouyu [1 ,2 ,6 ,7 ]
He, Chenyuan [1 ]
Wang, Hai [1 ]
Cai, Yingfeng [3 ]
Chen, Long [3 ]
Gan, Zhihua [4 ]
Huang, Fenghua [2 ,4 ]
Zhang, Yiqun [5 ]
Institutions
[1] Jiangsu Univ, Synergist Innovat Ctr Modern Agr Equipment, Sch Automot & Traff Engn, Jiangsu Prov & Educ Minist, Xuefu Rd 301, Zhenjiang 212013, Peoples R China
[2] Yango Univ, Fujian Key Lab Spatial Informat Percept & Intellig, Denglong Rd 99, Fuzhou 350015, Peoples R China
[3] Jiangsu Univ, Automot Engn Res Inst, Xuefu Rd 301, Zhenjiang 212013, Peoples R China
[4] Zhejiang Univ, Coll Energy Engn, 38 Zheda Rd, Hangzhou 310027, Peoples R China
[5] TopXGun Nanjing Robot Co Ltd, Dongji Ave 1, Nanjing 211153, Peoples R China
[6] AnHui Polytech Univ, AnHui Key Lab Detect Technol & Energy Saving Devic, Beijing Middle Rd 8, Wuhu 10363, Peoples R China
[7] Xihua Univ, Vehicle Measurement Control & Safety Key Lab Sichu, Jinzhou Rd 999, Chengdu 610039, Peoples R China
Keywords
Image fusion; Infrared and visible image; Convolutional sparse representation (CSR); Unmanned aerial vehicle (UAV)
DOI
10.1016/j.jksuci.2024.102090
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Infrared and visible image fusion is an effective solution for image quality enhancement. However, conventional fusion models require decomposing the source images into image blocks, which disrupts the original structure of the images, causes loss of detail in the fused images, and makes the fusion results highly sensitive to matching errors. This paper employs Convolutional Sparse Representation (CSR) to perform a global feature transformation on the source images, overcoming the drawbacks of traditional fusion models that rely on image decomposition. Inspired by neural networks, a multi-layer CSR model is proposed, comprising five layers in a feed-forward arrangement: two CSR layers that acquire sparse coefficient maps, one fusion layer that combines the sparse maps, and two reconstruction layers for image recovery. The dataset used in this paper comprises infrared and visible images selected from public datasets, as well as registered images collected by an actual Unmanned Aerial Vehicle (UAV). The source images contain ground targets, marine targets, and natural landscapes. To validate the effectiveness of the proposed image fusion model, a comparative analysis is conducted against state-of-the-art (SOTA) algorithms. Experimental results demonstrate that the proposed fusion model outperforms the SOTA methods by at least 10% in the SF, EN, MI, and Q^{AB/F} fusion metrics in most image fusion cases, confirming its favorable performance.
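The pipeline summarized in the abstract (two CSR layers producing sparse coefficient maps, a fusion layer, then reconstruction) can be illustrated with a minimal single-scale sketch. This is not the authors' code: the ISTA solver, the random filter bank, the choose-max fusion rule, and all function names here are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import fftconvolve

def csr_coefficients(img, filters, n_iter=50, lam=0.01):
    """CSR layer (sketch): sparse coefficient maps for `img` via ISTA on
    min_x 0.5*||sum_k d_k * x_k - img||^2 + lam*||x||_1."""
    maps = np.zeros((len(filters),) + img.shape)
    # conservative Lipschitz bound via the filters' l1 norms (keeps ISTA stable)
    L = sum(np.abs(f).sum() ** 2 for f in filters)
    for _ in range(n_iter):
        recon = sum(fftconvolve(m, f, mode="same") for m, f in zip(maps, filters))
        resid = recon - img
        for k, f in enumerate(filters):
            # correlation = convolution with the flipped filter
            grad = fftconvolve(resid, f[::-1, ::-1], mode="same")
            z = maps[k] - grad / L
            # soft-thresholding enforces sparsity of the coefficient maps
            maps[k] = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return maps

def fuse(ir, vis, filters):
    # two CSR layers: sparse coefficient maps for each source image
    m_ir = csr_coefficients(ir, filters)
    m_vis = csr_coefficients(vis, filters)
    # fusion layer: choose-max on coefficient activity (assumed rule)
    fused = np.where(np.abs(m_ir) >= np.abs(m_vis), m_ir, m_vis)
    # reconstruction layer: synthesis by summing the filtered fused maps
    return sum(fftconvolve(m, f, mode="same") for m, f in zip(fused, filters))
```

The sketch operates on whole images rather than image blocks, which is the point the abstract makes: the sparse transform is global, so no block decomposition disturbs the image structure.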
Pages: 15