Multifocus Image Fusion With Complex Sparse Representation

Times Cited: 1
Authors
Chen, Yuhang [1 ]
Liu, Yu [2 ]
Ward, Rabab K. [3 ]
Chen, Xun [1 ]
Affiliations
[1] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230027, Peoples R China
[2] Hefei Univ Technol, Dept Biomed Engn, Hefei 230009, Peoples R China
[3] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC V6T 1Z4, Canada
Funding
National Natural Science Foundation of China;
Keywords
Complex sparse representation (CSR); Hilbert transform; hypercomplex signals; image fusion; multisensor data fusion; sparse representation (SR); FOCUS; PERFORMANCE; ALGORITHM; EXTENSION; TRANSFORM; FRAMEWORK; NETWORK; SIGNALS; FILTER; DEEP;
DOI
10.1109/JSEN.2024.3411588
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Multifocus image fusion aims to merge source images with distinct focused areas into a single, all-in-focus image. Sparse representation (SR) is a robust signal-modeling technique that has achieved remarkable success in multifocus image fusion, and the numerous SR-based fusion methods proposed over the years underscore its importance in enhancing fusion quality. However, a fundamental limitation of most existing SR models is their lack of directionality, which restricts their capacity to extract intricate details. To address this issue, we propose the complex SR (CSR) model for image fusion. The model exploits the properties of hypercomplex signals to extract directional information from real-valued signals through complex extension. The directional components of the input signal are then decomposed into sparse coefficients over corresponding directional dictionaries. The key advantage of this design over conventional SR models is its ability to capture geometrical image structures effectively, since CSR coefficients provide precise measurements of detail along specific directions. Experimental results on three widely used multifocus image fusion datasets substantiate the superiority of our method over 17 representative multifocus image fusion methods in terms of both visual quality and objective evaluation.
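As a rough illustration of the pipeline the abstract describes, the sketch below forms a complex (analytic-signal) extension of each image patch with 1-D Hilbert transforms along rows and columns, sparse-codes the result with a small orthogonal matching pursuit over a complex dictionary, and uses the l1 norm of the coefficients as a focus measure for patchwise fusion. This is a minimal NumPy/SciPy sketch under stated assumptions, not the authors' implementation: the function names, the random complex dictionary (a placeholder for the learned directional dictionaries), and the two fixed directions are illustrative choices only.

```python
import numpy as np
from scipy.signal import hilbert


def complex_extension(patch, axis):
    # 1-D analytic signal along one image direction. Assumption: row-wise /
    # column-wise Hilbert transforms stand in for the hypercomplex
    # directional extension used in the paper.
    return hilbert(patch.astype(float), axis=axis)


def omp(D, x, k):
    # Minimal orthogonal matching pursuit for complex-valued signals.
    # D: (n, m) dictionary with unit-norm atoms, x: (n,) signal, k: sparsity.
    residual, support = x.copy(), []
    coeffs = np.zeros(D.shape[1], dtype=complex)
    for _ in range(k):
        corr = np.abs(D.conj().T @ residual)
        corr[support] = -np.inf                      # never reselect an atom
        support.append(int(np.argmax(corr)))
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs


def csr_activity(patch, D, k=8):
    # Focus measure: l1 norm of the sparse codes of the 0 and 90 degree
    # complex extensions (a simplified proxy for a CSR activity level).
    return sum(np.abs(omp(D, complex_extension(patch, ax).ravel(), k)).sum()
               for ax in (0, 1))


def fuse_patchwise(img_a, img_b, patch=8, k=8, seed=0):
    # Pick, per non-overlapping patch, the source with the larger activity.
    # The random complex dictionary is only a placeholder for learned
    # directional dictionaries; uncovered border pixels default to img_a.
    rng = np.random.default_rng(seed)
    n = patch * patch
    D = rng.standard_normal((n, 4 * n)) + 1j * rng.standard_normal((n, 4 * n))
    D /= np.linalg.norm(D, axis=0)                   # unit-norm complex atoms
    out = img_a.astype(float).copy()
    for i in range(0, img_a.shape[0] - patch + 1, patch):
        for j in range(0, img_a.shape[1] - patch + 1, patch):
            pa = img_a[i:i + patch, j:j + patch]
            pb = img_b[i:i + patch, j:j + patch]
            if csr_activity(pb, D, k) > csr_activity(pa, D, k):
                out[i:i + patch, j:j + patch] = pb
    return out
```

In the paper itself, the directional decomposition comes from hypercomplex signal theory and the dictionaries are direction-specific; here both are replaced by the simplest stand-ins that keep the example self-contained and runnable.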
Pages: 34744-34755
Number of pages: 12