Shape-intensity-guided U-net for medical image segmentation

Cited by: 6
Authors
Dong, Wenhui
Du, Bo
Xu, Yongchao [1 ]
Affiliations
[1] Wuhan University, Institute of Artificial Intelligence, School of Computer Science, Wuhan, People's Republic of China
Keywords
Medical image segmentation; Texture bias; Shape-intensity prior; Model generalization; NETWORK;
DOI
10.1016/j.neucom.2024.128534
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Medical image segmentation has achieved impressive results thanks to U-Net and its variants. Yet most existing methods segment by classifying individual pixels and tend to ignore shape-intensity prior information, which may yield implausible segmentation results. Moreover, segmentation performance often drops sharply on unseen datasets. One possible reason is that the model is biased towards texture information, which varies more than shape information across datasets. In this paper, we introduce a novel Shape-Intensity-Guided U-Net (SIG-UNet) to improve the generalization ability of U-Net variants in segmenting medical images. Specifically, we adopt the U-Net architecture to reconstruct class-wise averaged images that contain only shape-intensity information. We then add an extra decoder branch, parallel to the reconstruction decoder, for segmentation, and apply skip fusion between the two decoders. Since the class-wise averaged image contains no texture information, the reconstruction decoder focuses more on shape and intensity features than the encoder, which operates on the original image. Therefore, the final segmentation decoder has less texture bias. Extensive experiments on three medical image segmentation tasks spanning different modalities demonstrate that the proposed SIG-UNet achieves promising intra-dataset results while significantly improving cross-dataset segmentation performance. The source code will be publicly available after acceptance.
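
To make the architecture described in the abstract more concrete, the sketch below illustrates the two ingredients it mentions: constructing a class-wise averaged (texture-free) image as a reconstruction target, and a shared-encoder U-Net with a reconstruction decoder plus a segmentation decoder that fuses the reconstruction decoder's features. This is a minimal PyTorch sketch under assumed shapes and layer sizes, not the authors' released implementation; all names (class_average_image, TinySIGUNet, ConvBlock) are hypothetical.

# Hypothetical sketch of the ideas in the abstract; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def class_average_image(image, mask, num_classes):
    # Replace every pixel by the mean intensity of its ground-truth class,
    # yielding a texture-free "shape-intensity" reconstruction target.
    # image: (H, W) float tensor; mask: (H, W) integer label tensor.
    target = torch.zeros_like(image)
    for c in range(num_classes):
        region = (mask == c)
        if region.any():
            target[region] = image[region].mean()
    return target

class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))
    def forward(self, x):
        return self.block(x)

class TinySIGUNet(nn.Module):
    # Toy two-level U-Net with a shared encoder and two decoders: one
    # reconstructs the class-wise averaged image, the other predicts the
    # segmentation and fuses the reconstruction decoder's features.
    def __init__(self, in_ch=1, num_classes=2, base=16):
        super().__init__()
        self.enc1 = ConvBlock(in_ch, base)
        self.enc2 = ConvBlock(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.rec_dec = ConvBlock(base * 2 + base, base)          # reconstruction decoder
        self.rec_head = nn.Conv2d(base, in_ch, 1)
        self.seg_dec = ConvBlock(base * 2 + base + base, base)   # segmentation decoder
        self.seg_head = nn.Conv2d(base, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                                   # encoder, full resolution
        e2 = self.enc2(self.pool(e1))                       # encoder, half resolution
        u = self.up(e2)                                      # upsample back to full resolution
        rec_feat = self.rec_dec(torch.cat([u, e1], dim=1))  # reconstruction features
        recon = self.rec_head(rec_feat)                      # estimate of class-averaged image
        # skip fusion: segmentation decoder sees encoder skip + reconstruction features
        seg_feat = self.seg_dec(torch.cat([u, e1, rec_feat], dim=1))
        logits = self.seg_head(seg_feat)
        return recon, logits

# Illustrative training step on random data (shapes assumed for the sketch).
x = torch.randn(2, 1, 64, 64)
mask = torch.randint(0, 2, (2, 64, 64))
model = TinySIGUNet()
recon, logits = model(x)
target = torch.stack([class_average_image(x[i, 0], mask[i], 2) for i in range(2)]).unsqueeze(1)
loss = F.mse_loss(recon, target) + F.cross_entropy(logits, mask)
loss.backward()

The last lines show one plausible joint objective: a reconstruction loss against the class-wise averaged image combined with the usual pixel-wise segmentation loss; the paper's actual losses and fusion details may differ.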
Pages: 12