RefineU-Net: Improved U-Net with progressive global feedbacks and residual attention guided local refinement for medical image segmentation

Cited by: 38
Authors
Lin, Dongyun [1 ]
Li, Yiqun [1 ]
Nwe, Tin Lay [1 ]
Dong, Sheng [1 ]
Oo, Zaw Min [1 ]
Affiliations
[1] ASTAR, Inst Infocomm Res, 1 Fusionopolis Way,21-01 Connexis,South Tower, Singapore 138632, Singapore
Keywords
U-Net; Medical image segmentation; Progressive global feedbacks; Local refinement; Residual attention gate;
DOI
10.1016/j.patrec.2020.07.013
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Motivated by recent advances in medical image segmentation using the fully convolutional network (FCN) known as U-Net and its variants, we propose a novel improved FCN architecture called RefineU-Net. The proposed RefineU-Net consists of three modules: an encoding module (EM), a global refinement module (GRM) and a local refinement module (LRM). The EM uses a VGG-16 backbone pretrained on ImageNet. The GRM generates intermediate layers in the skip connections of U-Net: it progressively upsamples the top side output of the EM and fuses the resulting upsampled features with the side outputs of the EM at each resolution level. The fused features combine the global context information of shallow layers with the semantic information of deep layers for global refinement. Subsequently, to facilitate local refinement, the LRM uses a residual attention gate (RAG) to generate discriminative attentive features that are concatenated with the decoded features in the expansive path of U-Net. The three modules are trained jointly in an end-to-end manner, so that global and local refinement are performed complementarily. Extensive experiments on four public datasets for polyp and skin lesion segmentation show the superiority of the proposed RefineU-Net over multiple state-of-the-art related methods. (C) 2020 Elsevier B.V. All rights reserved.
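The residual attention gate described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumed formulation, not the authors' exact implementation: it models the 1x1 convolutions as channel-mixing matrix products, gates the encoder skip features with a sigmoid attention map computed from the skip and decoder (gating) features, and applies the residual form `skip * (1 + attention)`. All names (`residual_attention_gate`, `W_s`, `W_g`, `psi`) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_attention_gate(skip, gate, W_s, W_g, psi):
    """Sketch of a residual attention gate (RAG).

    skip : encoder skip features, shape (C, H, W)
    gate : decoder gating features at the same resolution, shape (C, H, W)
    W_s, W_g : channel-mixing weights standing in for 1x1 convs, shape (C_int, C)
    psi : weights collapsing C_int channels to one attention map, shape (C_int,)
    """
    # Project both inputs to an intermediate channel dimension (C_int, H, W).
    s = np.tensordot(W_s, skip, axes=([1], [0]))
    g = np.tensordot(W_g, gate, axes=([1], [0]))
    # ReLU on the sum, then collapse channels and squash to (0, 1): an (H, W) map.
    a = sigmoid(np.tensordot(psi, np.maximum(s + g, 0.0), axes=([0], [0])))
    # Residual form: attended features are added back onto the skip features,
    # i.e. skip * a + skip, broadcast over the channel axis.
    return skip * (1.0 + a)
```

Because the attention map lies in (0, 1), the residual form can only amplify skip features (by a factor between 1 and 2) rather than suppress them entirely, which is one plausible reading of "residual" in the gate's name.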
Pages: 267-275
Page count: 9
Related papers
50 records in total
  • [21] Shape-intensity-guided U-net for medical image segmentation
    Dong, Wenhui
    Du, Bo
    Xu, Yongchao
    NEUROCOMPUTING, 2024, 610
  • [22] Attention guided U-Net for accurate iris segmentation
    Lian, Sheng
    Luo, Zhiming
    Zhong, Zhun
    Lin, Xiang
    Su, Songzhi
    Li, Shaozi
    JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2018, 56 : 296 - 304
  • [23] Wavelet U-Net for Medical Image Segmentation
    Li, Ying
    Wang, Yu
    Leng, Tuo
    Wen, Zhijie
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT I, 2020, 12396 : 800 - 810
  • [24] BCU-Net: Bridging ConvNeXt and U-Net for medical image segmentation
    Zhang, Hongbin
    Zhong, Xiang
    Li, Guangli
    Liu, Wei
    Liu, Jiawei
    Ji, Donghong
    Li, Xiong
    Wu, Jianguo
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 159
  • [25] MSRD-Unet: Multiscale Residual Dilated U-Net for Medical Image Segmentation
    Khalaf, Muna
    Dhannoon, Ban N.
    BAGHDAD SCIENCE JOURNAL, 2022, 19 (06) : 1603 - 1611
  • [26] CSCA U-Net: A channel and space compound attention CNN for medical image segmentation
    Shu, Xin
    Wang, Jiashu
    Zhang, Aoping
    Shi, Jinlong
    Wu, Xiao-Jun
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2024, 150
  • [27] Multi-Convolutional Channel Residual Spatial Attention U-Net for Industrial and Medical Image Segmentation
    Chen, Haoyu
    Kim, Kyungbaek
    IEEE ACCESS, 2024, 12 : 76089 - 76101
  • [28] TransAttUnet: Multi-Level Attention-Guided U-Net With Transformer for Medical Image Segmentation
    Chen, Bingzhi
    Liu, Yishu
    Zhang, Zheng
    Lu, Guangming
    Kong, Adams Wai Kin
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2024, 8 (01): : 55 - 68
  • [29] Attention-guided hierarchical fusion U-Net for uncertainty-driven medical image segmentation
    Munia, Afsana Ahmed
    Abdar, Moloud
    Hasan, Mehedi
    Jalali, Mohammad S.
    Banerjee, Biplab
    Khosravi, Abbas
    Hossain, Ibrahim
    Fu, Huazhu
    Frangi, Alejandro F.
    INFORMATION FUSION, 2025, 115
  • [30] Design of Superpixel U-Net Network for Medical Image Segmentation
    Wang H.
    Liu H.
    Guo Q.
    Deng K.
    Zhang C.
    Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, 2019, 31 (06): : 1007 - 1017