MLNet: A Multi-Domain Lightweight Network for Multi-Focus Image Fusion

Cited by: 9
Authors
Nie, Xixi [1 ]
Hu, Bo [1 ]
Gao, Xinbo [1 ]
Affiliations
[1] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Image Cognit, Chongqing 400065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; multi-focus image fusion; discrete cosine transform; local binary pattern; PERFORMANCE; TRANSFORM; GAN;
DOI
10.1109/TMM.2022.3194991
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Existing multi-focus image fusion (MFIF) methods struggle to deliver satisfactory results in fusion performance and speed simultaneously. Spatial domain methods find it hard to determine the focus/defocus boundary (FDB), while transform domain methods are likely to damage the content information of the source images. Moreover, deep learning-based MFIF methods usually suffer from low speed due to complex models and enormous numbers of learnable parameters. To address these issues, we propose a multi-domain lightweight network (MLNet) for MFIF, which achieves competitive results in both performance and speed. The proposed MLNet mainly comprises three modules: focus extraction (FE), focus measure (FM), and image fusion (IF). In the interpretable FE module, the image features extracted by discrete cosine transform-based convolution (DCTConv) and local binary pattern-based convolution (LBPConv) are concatenated and fed into the FM module. DCTConv, based on the transform domain, uses DCT coefficients to construct fixed convolution kernels without parameter learning, which effectively capture the high/low-frequency content of the image. LBPConv, based on the spatial domain, obtains structure features and gradient information from the source images. In the FM module, a 3-layer 1 x 1 convolution with few learnable parameters generates the initial decision map and accepts inputs of flexible size. The fused image is obtained by the IF module according to the final decision map. In terms of quantitative and qualitative evaluations, extensive experiments validate that the proposed method outperforms existing state-of-the-art methods on three public datasets. In addition, the proposed MLNet contains only 0.01 M parameters, which is 0.2% of the first CNN-based MFIF method [25].
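The interpretable FE module described above rests on two fixed, parameter-free filter types. As a rough illustration (not the authors' code; the 3x3 kernel size and the NumPy formulation are assumptions), the sketch below builds an orthonormal 2-D DCT basis bank of the kind DCTConv relies on, plus a classic 3x3 local binary pattern code of the kind underlying LBPConv:

```python
import numpy as np

def dct_kernels(k=3):
    """Bank of k*k fixed 2-D DCT-II basis kernels (no learnable parameters).
    Kernel (u, v) responds to one horizontal/vertical spatial frequency,
    so the bank separates low- and high-frequency image content."""
    n = np.arange(k)
    # Orthonormal 1-D DCT-II matrix: C[u, x] = a(u) * cos((2x + 1) * u * pi / (2k))
    scale = np.sqrt(np.where(n == 0, 1.0 / k, 2.0 / k))
    C = scale[:, None] * np.cos((2 * n[None, :] + 1) * n[:, None] * np.pi / (2 * k))
    # Separable 2-D basis kernels: B[u*k + v, x, y] = C[u, x] * C[v, y]
    return np.einsum('ux,vy->uvxy', C, C).reshape(k * k, k, k)

def lbp_code(patch):
    """Classic 3x3 local binary pattern: threshold the 8 neighbours at the
    centre pixel and pack the comparison bits clockwise into one byte."""
    c = patch[1, 1]
    nbrs = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= c) << i for i, v in enumerate(nbrs))

bank = dct_kernels(3)
flat = bank.reshape(9, -1)
# The DCT basis is orthonormal, so the Gram matrix is the identity.
print(np.allclose(flat @ flat.T, np.eye(9)))           # True
# A perfectly flat (defocus-like) patch has zero high-frequency response.
print(abs((bank[1] * np.ones((3, 3))).sum()) < 1e-12)  # True
print(lbp_code(np.full((3, 3), 7.0)))                  # 255
```

Because both kernel banks are fixed closed-form constructions, only the 1 x 1 convolutions of the FM module would need training, which is consistent with the 0.01 M parameter count the abstract reports.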
Pages: 5565-5579
Page count: 15
References
48 items in total
[1]  
Amin-Naji M., 2018, J AI DATA MINING, V6, P233, DOI 10.22044/JADM.2017.5169.1624
[2]   A Novel Micro-Multifocus X-Ray Source Based on Electron Beam Scanning for Multi-View Stationary Micro Computed Tomography [J].
An, Kang ;
Yin, Yifan ;
Li, Fukun ;
Shi, Limin ;
Hu, Xiaolong ;
Tang, Jie ;
Zhou, Rifeng .
IEEE ELECTRON DEVICE LETTERS, 2020, 41 (01) :167-170
[3]   Directive Contrast Based Multimodal Medical Image Fusion in NSCT Domain [J].
Bhatnagar, Gaurav ;
Wu, Q. M. Jonathan ;
Liu, Zheng .
IEEE TRANSACTIONS ON MULTIMEDIA, 2013, 15 (05) :1014-1024
[4]   Multi-Focus Image Fusion Based on Spatial Frequency in Discrete Cosine Transform Domain [J].
Cao, Liu ;
Jin, Longxu ;
Tao, Hongjiang ;
Li, Guoning ;
Zhuang, Zhuang ;
Zhang, Yanfu .
IEEE SIGNAL PROCESSING LETTERS, 2015, 22 (02) :220-224
[5]   A new automated quality assessment algorithm for image fusion [J].
Chen, Yin ;
Blum, Rick S. .
IMAGE AND VISION COMPUTING, 2009, 27 (10) :1421-1432
[6]   Sparse directional image representations using the discrete shearlet transform [J].
Easley, Glenn ;
Labate, Demetrio ;
Lim, Wang-Q .
APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS, 2008, 25 (01) :25-46
[7]   FuseGAN: Learning to Fuse Multi-Focus Image via Conditional Generative Adversarial Network [J].
Guo, Xiaopeng ;
Nie, Rencan ;
Cao, Jinde ;
Zhou, Dongming ;
Mei, Liye ;
He, Kangjian .
IEEE TRANSACTIONS ON MULTIMEDIA, 2019, 21 (08) :1982-1996
[8]   Multi-focus image fusion for visual sensor networks in DCT domain [J].
Haghighat, Mohammad Bagher Akbari ;
Aghagolzadeh, Ali ;
Seyedarabi, Hadi .
COMPUTERS & ELECTRICAL ENGINEERING, 2011, 37 (05) :789-797
[9]   Multi-focus: Focused region finding and multi-scale transform for image fusion [J].
He, Kangjian ;
Zhou, Dongming ;
Zhang, Xuejie ;
Nie, Rencan .
NEUROCOMPUTING, 2018, 320 :157-170
[10]  
Hong X., 2019, P IEEE INT C WAV AN, P1