AEFusion: A multi-scale fusion network combining Axial attention and Entropy feature Aggregation for infrared and visible images

Cited: 13
Authors
Li, Bicao [1 ,2 ]
Lu, Jiaxi [1 ]
Liu, Zhoufeng [1 ]
Shao, Zhuhong [3 ]
Li, Chunlei [1 ]
Du, Yifan [1 ]
Huang, Jie [1 ]
Affiliations
[1] Zhongyuan Univ Technol, Zhengzhou, Peoples R China
[2] Zhengzhou Univ, Zhengzhou, Peoples R China
[3] Capital Normal Univ, Beijing, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Image fusion; Axial attention; Entropy features; Infrared and visible images; GENERATIVE ADVERSARIAL NETWORK; PERFORMANCE; EXTRACTION; NEST; GAN;
DOI
10.1016/j.asoc.2022.109857
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
The purpose of image fusion is to generate a single image that contains more complementary information. Existing image fusion methods suffer from loss of detail, artifacts and/or inconsistencies. To alleviate these problems, we propose a feature extraction network that incorporates Axial attention, which captures long-range semantic information while extracting multi-scale features and thus has stronger feature representation capability. Likewise, existing fusion strategies also suffer from loss of detail. To address this problem, a new fusion strategy is proposed, in which a novel attention mechanism is constructed by applying entropy features to aggregate edge and detail features. In addition, a new loss function is designed to constrain the network. To validate the effectiveness of the proposed method, experiments are performed on public datasets. Compared with other fusion methods, the proposed method achieves state-of-the-art results in both subjective and objective evaluations. Furthermore, ablation studies illustrate the superiority of the proposed method. (c) 2022 Published by Elsevier B.V.
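The abstract describes the two mechanisms only at a high level: axial attention for long-range context during multi-scale feature extraction, and an entropy-based attention for aggregating edge and detail features at fusion time. The sketch below is a minimal, hypothetical illustration of both ideas in PyTorch-style Python; the names (AxialAttention, channel_entropy, entropy_fusion) and the channel-wise Shannon-entropy weighting are assumptions made for illustration, not the paper's actual AEFusion implementation.

```python
# Minimal sketch (assumptions, not the paper's implementation):
# (1) self-attention applied along one spatial axis, as in axial attention;
# (2) fusion weights derived from the Shannon entropy of each feature channel.
import torch
import torch.nn as nn


class AxialAttention(nn.Module):
    """Self-attention restricted to a single spatial axis ('h' or 'w')."""

    def __init__(self, dim, axis="h"):
        super().__init__()
        self.axis = axis
        self.scale = dim ** -0.5
        self.to_qkv = nn.Conv2d(dim, dim * 3, kernel_size=1, bias=False)

    def forward(self, x):                                  # x: (B, C, H, W)
        q, k, v = self.to_qkv(x).chunk(3, dim=1)
        perm = (0, 3, 2, 1) if self.axis == "h" else (0, 2, 3, 1)
        q, k, v = (t.permute(*perm) for t in (q, k, v))    # attention runs over one axis
        attn = torch.softmax(q @ k.transpose(-1, -2) * self.scale, dim=-1)
        out = attn @ v
        back = (0, 3, 2, 1) if self.axis == "h" else (0, 3, 1, 2)
        return out.permute(*back)                          # back to (B, C, H, W)


def channel_entropy(feat, eps=1e-8):
    """Shannon entropy of each channel's normalised spatial activation map."""
    p = feat.abs().flatten(2) + eps                        # (B, C, H*W), non-negative
    p = p / p.sum(dim=-1, keepdim=True)
    return -(p * p.log()).sum(dim=-1)                      # (B, C)


def entropy_fusion(feat_ir, feat_vis):
    """Weight infrared/visible features by a softmax over their channel entropies."""
    w = torch.softmax(torch.stack([channel_entropy(feat_ir),
                                   channel_entropy(feat_vis)]), dim=0)   # (2, B, C)
    w_ir, w_vis = w[0, ..., None, None], w[1, ..., None, None]           # (B, C, 1, 1)
    return w_ir * feat_ir + w_vis * feat_vis


if __name__ == "__main__":
    ir, vis = torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32)
    fused = entropy_fusion(AxialAttention(64)(ir), AxialAttention(64)(vis))
    print(fused.shape)                                     # torch.Size([1, 64, 32, 32])
```

In the actual network these components would sit inside a trained multi-scale encoder-decoder; the sketch only shows the tensor shapes and the entropy-derived weighting idea, not the paper's loss function or training procedure.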
Pages: 16