ACL-Net: Attribute-Aware Contrastive Learning Network for Medical Image Fusion

Cited by: 0
Authors
Liu, Yanyu [1 ,2 ]
Hou, Ruichao [3 ]
Ding, Zhaisheng [4 ]
Zhou, Dongming [5 ]
Cao, Jinde [6 ]
Affiliations
[1] Yunnan Univ Finance & Econ, Sch Logist & Management Engn, Kunming 650221, Peoples R China
[2] Yunnan Univ Finance & Econ, Yunnan Key Lab Serv Comp, Kunming 650221, Peoples R China
[3] Nanjing Univ, State Key Lab Novel Software Technol, Nanjing 210023, Peoples R China
[4] Jiangsu Normal Univ, Sch Elect Engn & Automat, Xuzhou 221116, Peoples R China
[5] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650504, Peoples R China
[6] Southeast Univ, Sch Math, Nanjing 210096, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; Contrastive learning; Magnetic resonance imaging; Transforms; Transformers; Training; Medical diagnostic imaging; Electronic mail; Decoding; Feature extraction; medical image fusion; pixel intensity and structure transfer; FRAMEWORK;
DOI
10.1109/LSP.2025.3577945
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Medical image fusion aims to integrate multi-sensor source images into a unified representation, providing comprehensive and diagnostically enriched information to support clinical decision-making. However, the scarcity of labeled data makes it difficult to effectively learn complementary features across modalities. In this paper, we propose a novel attribute-aware contrastive learning network, called ACL-Net, to boost medical image fusion performance. Specifically, we introduce an attribute transformation strategy that simulates variations in pixel intensity and structural patterns, guiding the model to focus on critical cross-modal information. This strategy strengthens contrastive learning by generating diverse negative pairs, thereby mitigating the scarcity of negative samples in unsupervised fusion scenarios. Extensive experiments demonstrate that our method achieves superior performance compared to state-of-the-art medical image fusion methods.
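The abstract's core idea, i.e. attribute transforms over pixel intensity and structure that manufacture negative pairs for a contrastive loss, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the specific transforms (gamma correction for intensity, block shuffling for structure) and the InfoNCE-style loss over raw-pixel "features" are assumptions for demonstration only.

```python
import numpy as np

def intensity_transform(img, gamma):
    """Pixel-intensity attribute transform: gamma correction on a [0, 1] image."""
    return np.clip(img, 0.0, 1.0) ** gamma

def structure_transform(img, block=4, seed=0):
    """Structural attribute transform: shuffle non-overlapping blocks, which
    perturbs spatial structure while preserving the intensity histogram."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    blocks = [img[i:i + block, j:j + block].copy()
              for i in range(0, h, block) for j in range(0, w, block)]
    rng.shuffle(blocks)
    out = np.empty_like(img)
    k = 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            out[i:i + block, j:j + block] = blocks[k]
            k += 1
    return out

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE contrastive loss: pull the positive toward the anchor,
    push the transform-generated negatives away."""
    def cos(a, b):
        return float(a.ravel() @ b.ravel()
                     / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    logits = np.array([cos(anchor, positive)]
                      + [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()          # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])        # positive pair sits at index 0

# A mild transform yields the positive; strong transforms yield negatives.
img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
pos = intensity_transform(img, 1.05)
negs = [intensity_transform(img, 3.0), structure_transform(img, seed=1)]
loss = info_nce(img, pos, negs)
```

In a real network the similarities would be computed on learned encoder features rather than raw pixels; the sketch only shows how attribute transforms supply the diverse negatives that the abstract says are scarce in unsupervised fusion.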
Pages: 2484-2488
Page count: 5