Deep learning for pixel-level image fusion: Recent advances and future prospects

Cited by: 547
Authors
Liu, Yu [1 ]
Chen, Xun [1 ,2 ]
Wang, Zengfu [3 ]
Wang, Z. Jane [4 ]
Ward, Rabab K. [4 ]
Wang, Xuesong [5 ]
Affiliations
[1] Hefei Univ Technol, Dept Biomed Engn, Hefei 230009, Anhui, Peoples R China
[2] Univ Sci & Technol China, Dept Elect Sci & Technol, Hefei 230026, Anhui, Peoples R China
[3] Univ Sci & Technol China, Dept Automat, Hefei 230026, Anhui, Peoples R China
[4] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC, Canada
[5] China Univ Min & Technol, Sch Informat & Control Engn, Xuzhou 221116, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Image fusion; Deep learning; Convolutional neural network; Convolutional sparse representation; Stacked autoencoder; PAN-SHARPENING METHOD; MULTI-FOCUS IMAGES; SPARSE REPRESENTATION; INFORMATION MEASURE; SPATIAL-FREQUENCY; PERFORMANCE; QUALITY; SUPERRESOLUTION; DECOMPOSITION; SEGMENTATION;
DOI
10.1016/j.inffus.2017.10.007
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
By integrating the information contained in multiple images of the same scene into one composite image, pixel-level image fusion is recognized as highly significant in a variety of fields, including medical imaging, digital photography, remote sensing, and video surveillance. In recent years, deep learning (DL) has achieved great success in a number of computer vision and image processing problems, and the application of DL techniques to pixel-level image fusion has emerged as an active topic over the last three years. This survey paper presents a systematic review of the DL-based pixel-level image fusion literature. Specifically, we first summarize the main difficulties that exist in conventional image fusion research and discuss the advantages that DL can offer in addressing each of these problems. Then, the recent achievements in DL-based image fusion are reviewed in detail: more than a dozen recently proposed image fusion methods based on DL techniques, including convolutional neural networks (CNNs), convolutional sparse representation (CSR), and stacked autoencoders (SAEs), are introduced. Finally, by summarizing the existing DL-based image fusion methods into several generic frameworks and presenting a potential DL-based framework for developing objective evaluation metrics, we put forward some prospects for future research on this topic. The key issues and challenges that exist in each framework are discussed.
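To make the notion of pixel-level fusion in the abstract concrete, here is a minimal toy sketch, not any method from the paper: a naive per-pixel maximum-selection rule, standing in for the learned or hand-designed activity-level measurement that CNN-, CSR-, or SAE-based methods replace.

```python
import numpy as np

def fuse_max(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Naive pixel-level fusion: keep the larger intensity at each pixel.

    This toy rule merges two co-registered source images into one
    composite; real DL-based fusion methods instead learn how to
    measure per-pixel activity and weight the sources accordingly.
    """
    return np.maximum(img_a, img_b)

# Two tiny co-registered "images" with complementary salient regions.
a = np.array([[0.1, 0.9], [0.5, 0.2]])
b = np.array([[0.8, 0.3], [0.4, 0.7]])
fused = fuse_max(a, b)
```

The hard part of fusion research, as the survey argues, is precisely that such fixed rules generalize poorly, which is what motivates learning the fusion rule from data.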
Pages: 158-173
Number of pages: 16