A novel sparse representation based fusion approach for multi-focus images

Cited by: 21
Authors
Tang, Dan [1 ]
Xiong, Qingyu [2 ,3 ]
Yin, Hongpeng [1 ]
Zhu, Zhiqin [4 ]
Li, Yanxia [1 ]
Affiliations
[1] Chongqing Univ, Coll Automat, Chongqing 400030, Peoples R China
[2] Chongqing Univ, Minist Educ, Key Lab Dependable Serv Comp Cyber Phys Soc, Chongqing, Peoples R China
[3] Chongqing Univ, Sch Big Data & Software Engn, Chongqing 400030, Peoples R China
[4] Chongqing Univ Posts & Telecommun, Coll Automat, Chongqing 400065, Peoples R China
Keywords
Multi-focus image fusion; Sparse representation; Dictionary construction; Joint patch grouping
DOI
10.1016/j.eswa.2022.116737
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Multi-focus image fusion aims to combine multiple partially focused images of the same scene into a single all-in-focus image, and sparse representation is among the most effective methods for this task. Traditional sparse representation based fusion methods use all image patches for dictionary learning, which introduces redundant information and leads to artifacts and extra computation time. To remove this redundant information and build a compact dictionary, the proposed sparse representation based fusion approach introduces a novel dictionary construction method based on joint patch grouping and informative sampling. Nonlocal similarity is exploited in the joint patch grouping, so the source images are not treated independently: patches with similar structures across all source images are flagged as one group, and only one informative patch class per group is selected for dictionary learning, which simplifies the computation. The orthogonal matching pursuit (OMP) algorithm is applied to obtain sparse coefficients, and a max-L1 fusion rule is adopted to reconstruct the fused image. Experimental results show the superiority of the proposed approach.
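The abstract outlines a concrete pipeline: pool patches from all source images jointly, compact the pool into a small dictionary, sparse-code each source's patches with OMP, and fuse by the max-L1 rule. The sketch below, in Python with NumPy and scikit-learn, is a minimal illustration of that pipeline under stated assumptions, not the authors' implementation: the function name fuse_multifocus, the patch size, dictionary size, and sparsity level are illustrative choices, and the random subsampling in step 3 merely stands in for the paper's joint patch grouping and informative sampling.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)
from sklearn.linear_model import orthogonal_mp


def fuse_multifocus(img_a, img_b, patch=8, n_atoms=128, sparsity=5):
    """Fuse two partially focused grayscale images (2-D float arrays)."""
    # 1) Slide a patch-sized window over each source and flatten every patch.
    pa = extract_patches_2d(img_a, (patch, patch)).reshape(-1, patch * patch)
    pb = extract_patches_2d(img_b, (patch, patch)).reshape(-1, patch * patch)

    # 2) Remove each patch's mean so sparse codes capture structure rather
    #    than brightness; the means are fused separately at the end.
    ma, mb = pa.mean(1, keepdims=True), pb.mean(1, keepdims=True)
    pa0, pb0 = pa - ma, pb - mb

    # 3) Joint patch pool across *all* sources. The paper compacts this pool
    #    by grouping structurally similar patches (nonlocal similarity) and
    #    keeping one informative representative per group; random subsampling
    #    here is only a stand-in for that informative-sampling step.
    pool = np.vstack([pa0, pb0])
    rng = np.random.default_rng(0)
    sample = pool[rng.choice(len(pool), min(5000, len(pool)), replace=False)]

    # 4) Learn a compact dictionary on the sampled joint pool.
    D = MiniBatchDictionaryLearning(n_components=n_atoms, batch_size=256,
                                    random_state=0).fit(sample).components_

    # 5) Sparse-code every patch of each source with OMP (columns = patches).
    ca = orthogonal_mp(D.T, pa0.T, n_nonzero_coefs=sparsity)
    cb = orthogonal_mp(D.T, pb0.T, n_nonzero_coefs=sparsity)

    # 6) Max-L1 fusion rule: at each patch position, keep the coefficient
    #    vector with the larger L1 norm (in-focus patches have more activity).
    keep_a = np.abs(ca).sum(0) >= np.abs(cb).sum(0)
    codes = np.where(keep_a, ca, cb)
    means = np.where(keep_a, ma.ravel(), mb.ravel())

    # 7) Rebuild the fused patches and average their overlaps into one image.
    patches = (D.T @ codes + means).T.reshape(-1, patch, patch)
    return reconstruct_from_patches_2d(patches, img_a.shape)
```

In a fuller implementation, step 3 would cluster the joint patch pool by structural similarity, as the abstract's nonlocal-similarity grouping describes, and keep one informative representative per cluster instead of sampling at random.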
Pages: 11
Related Papers (50 in total)
• [1] Ma, Xiaole; Wang, Zhihai; Hu, Shaohai. Multi-focus image fusion based on multi-scale sparse representation. Journal of Visual Communication and Image Representation, 2021, 81.
• [2] Yin, Hongpeng; Li, Yanxia; Chai, Yi; Liu, Zhaodong; Zhu, Zhiqin. A novel sparse-representation-based multi-focus image fusion approach. Neurocomputing, 2016, 216: 216-229.
• [3] Zhang, Qiang; Shi, Tao; Wang, Fan; Blum, Rick S.; Han, Jungong. Robust sparse representation based multi-focus image fusion with dictionary construction and local spatial consistency. Pattern Recognition, 2018, 83: 299-313.
• [4] Ma, Xiaole; Hu, Shaohai; Liu, Shuaiqi; Fang, Jing; Xu, Shuwen. Multi-focus image fusion based on joint sparse representation and optimum theory. Signal Processing: Image Communication, 2019, 78: 125-134.
• [5] Wang, Yu; Li, Xiongfei; Zhu, Rui; Wang, Zeyu; Feng, Yuncong; Zhang, Xiaoli. A multi-focus image fusion framework based on multi-scale sparse representation in gradient domain. Signal Processing, 2021, 189.
• [6] Zheng, Kecheng; Cheng, Juan; Liu, Yu. Unfolding coupled convolutional sparse representation for multi-focus image fusion. Information Fusion, 2025, 118.
• [7] Zhang, Qiang; Liu, Yi; Blum, Rick S.; Han, Jungong; Tao, Dacheng. Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review. Information Fusion, 2018, 40: 57-75.
• [8] Wang, Jiwei; Qu, Huaijing; Zhang, Zhisheng; Xie, Ming. New insights into multi-focus image fusion: A fusion method based on multi-dictionary linear sparse representation and region fusion model. Information Fusion, 2024, 105.
• [9] Nejati, Mansour; Samavi, Shadrokh; Shirani, Shahram. Multi-focus image fusion using dictionary-based sparse representation. Information Fusion, 2015, 25: 72-84.
• [10] Zhang, Chengfang; Zhang, Ziyou; Li, Haoyue; He, Sidi; Feng, Ziliang. Multi-focus image fusion via online convolutional sparse coding. Multimedia Tools and Applications, 2024, 83(6): 17327-17356.