U2Fusion: A Unified Unsupervised Image Fusion Network

Cited by: 1247
Authors
Xu, Han [1 ]
Ma, Jiayi [1 ]
Jiang, Junjun [2 ]
Guo, Xiaojie [3 ]
Ling, Haibin [4 ]
Affiliations
[1] Wuhan Univ, Elect Informat Sch, Wuhan 430072, Peoples R China
[2] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150001, Peoples R China
[3] Tianjin Univ, Coll Intelligence & Comp, Tianjin 300350, Peoples R China
[4] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; unified model; unsupervised learning; continual learning; MULTI-FOCUS IMAGE; FRAMEWORK;
DOI
10.1109/TPAMI.2020.3012548
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
This study proposes a novel unified and unsupervised end-to-end image fusion network, termed U2Fusion, which is capable of solving different fusion problems, including multi-modal, multi-exposure, and multi-focus cases. Using feature extraction and information measurement, U2Fusion automatically estimates the importance of the corresponding source images and derives adaptive information preservation degrees. Hence, different fusion tasks are unified in the same framework. Based on these adaptive degrees, a network is trained to preserve an adaptive similarity between the fusion result and the source images. Therefore, the stumbling blocks in applying deep learning to image fusion, e.g., the requirement of ground truth and specifically designed metrics, are greatly mitigated. By avoiding the loss of previous fusion capabilities when training a single model for different tasks sequentially, we obtain a unified model that is applicable to multiple fusion tasks. Moreover, a new aligned infrared and visible image dataset, RoadScene (available at https://github.com/hanna-xu/RoadScene), is released to provide a new option for benchmark evaluation. Qualitative and quantitative experimental results on three typical image fusion tasks validate the effectiveness and universality of U2Fusion. Our code is publicly available at https://github.com/hanna-xu/U2Fusion.
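The abstract's core mechanism — measure the information content of each source image, convert the measurements into adaptive preservation degrees, then weight a similarity loss between the fused result and each source — can be sketched as follows. This is a minimal NumPy illustration of the idea only, not the authors' implementation: the actual U2Fusion measures gradients of deep (VGG-16) features and combines SSIM with intensity terms, whereas here raw image gradients and plain MSE stand in for both, and the function names and the scaling constant `c` are hypothetical.

```python
import numpy as np

def information_measure(img):
    # Proxy for the paper's information measurement: mean gradient
    # magnitude of the raw image (U2Fusion itself measures gradients
    # of deep VGG-16 feature maps).
    gy, gx = np.gradient(img.astype(np.float64))
    return np.sqrt(gx ** 2 + gy ** 2).mean()

def adaptive_degrees(img1, img2, c=0.1):
    # Softmax over scaled information measures yields the adaptive
    # information preservation degrees (they sum to 1; the source
    # carrying more information gets the larger degree).
    g = np.array([information_measure(img1), information_measure(img2)])
    e = np.exp(g / c)
    return e / e.sum()

def adaptive_similarity_loss(fused, img1, img2):
    # Weighted similarity between the fusion result and each source.
    # Plain MSE stands in for the paper's SSIM + intensity loss.
    w1, w2 = adaptive_degrees(img1, img2)
    mse = lambda a, b: np.mean((a - b) ** 2)
    return w1 * mse(fused, img1) + w2 * mse(fused, img2)
```

Because the degrees are derived from the inputs rather than from task labels, the same loss applies unchanged to multi-modal, multi-exposure, and multi-focus pairs, which is what lets one network serve all three tasks.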
Pages: 502-518
Number of pages: 17