Semi-supervised segmentation of lung CT images based on contrastive learning

Cited by: 0
Authors
Yiwen Qi [1 ]
Caibin Yao [1 ]
Hao Chen [1 ]
Xufei Wang [2 ]
Affiliations
[1] Fuzhou University,College of Electrical Engineering and Automation
[2] Shenyang Aerospace University,School of Automation
Keywords
Lung CT image; Contrastive learning; Semi-supervised segmentation; Attention mechanism;
DOI
10.1007/s11760-025-04142-3
Abstract
Accurate segmentation of lesions in lung CT images remains challenging due to blurred boundaries, small lesion sizes, and the scarcity of annotated data. To address these issues, this paper proposes a semi-supervised contrastive learning framework with a novel multiple attention UNet (MA-UNet) for lung CT image segmentation. The MA-UNet integrates a dual-attention module (DAM) and attention gates (AGs) to enhance spatial-channel feature refinement and boundary sensitivity. The DAM captures global context and channel-wise dependencies, while the AGs emphasize lesion-related features. Furthermore, residual blocks are used to improve gradient propagation and computational efficiency. To overcome the limited availability of annotations, we propose a contrastive learning framework that fully exploits both labeled and unlabeled data to improve segmentation accuracy. To verify the validity of the proposed methods and parameter design, we systematically carry out multiple ablation experiments. The results show that, with only half of the training data labeled, the contrastive-learning-based MA-UNet achieves Dice, MIoU, and Recall scores of 78.41%, 88.78%, and 91.79%, respectively, which are close to those of its fully supervised counterpart, effectively mitigating the problem of scarce labeled data.
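The abstract names two attention components, a dual-attention module (DAM) for spatial and channel dependencies and attention gates (AGs) for lesion-related skip features, but does not give their exact formulation. The sketch below is a minimal PyTorch illustration assuming the DAM follows the common DANet-style position/channel attention and the AG follows the Attention U-Net gating scheme; all module and argument names are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of DAM- and AG-style blocks; names and design details are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualAttentionModule(nn.Module):
    """Position (spatial) attention plus channel attention over one feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma_s = nn.Parameter(torch.zeros(1))  # learnable spatial-branch weight
        self.gamma_c = nn.Parameter(torch.zeros(1))  # learnable channel-branch weight

    def forward(self, x):
        b, c, h, w = x.shape
        # Spatial attention: every pixel attends to every other pixel (global context).
        q = self.query(x).flatten(2).transpose(1, 2)            # (b, hw, c/8)
        k = self.key(x).flatten(2)                               # (b, c/8, hw)
        v = self.value(x).flatten(2)                             # (b, c, hw)
        attn = torch.softmax(q @ k, dim=-1)                      # (b, hw, hw)
        spatial = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        # Channel attention: channel-wise dependencies from the raw features.
        f = x.flatten(2)                                          # (b, c, hw)
        chan = torch.softmax(f @ f.transpose(1, 2), dim=-1)       # (b, c, c)
        channel = (chan @ f).view(b, c, h, w)
        return x + self.gamma_s * spatial + self.gamma_c * channel


class AttentionGate(nn.Module):
    """Attention U-Net style gate: the decoder signal re-weights encoder skip features."""

    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.w_skip = nn.Conv2d(skip_ch, inter_ch, 1)
        self.w_gate = nn.Conv2d(gate_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)

    def forward(self, skip, gate):
        gate = F.interpolate(gate, size=skip.shape[2:], mode="bilinear",
                             align_corners=False)
        alpha = torch.sigmoid(self.psi(F.relu(self.w_skip(skip) + self.w_gate(gate))))
        return skip * alpha  # lesion-related regions of the skip connection are emphasized


if __name__ == "__main__":
    # Quick shape check with dummy tensors.
    feats = torch.randn(2, 64, 32, 32)     # encoder skip features
    gating = torch.randn(2, 128, 16, 16)   # coarser decoder features
    print(DualAttentionModule(64)(feats).shape)            # torch.Size([2, 64, 32, 32])
    print(AttentionGate(64, 128, 32)(feats, gating).shape)  # torch.Size([2, 64, 32, 32])
```

In a U-Net-style decoder, such a gate would typically be applied to each skip connection before concatenation, while the dual-attention block would sit at the bottleneck or on decoder features; how MA-UNet actually places them is specified in the paper itself.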