Disentangle domain features for cross-modality cardiac image segmentation

Cited by: 54
Authors
Pei, Chenhao [1 ]
Wu, Fuping [2 ,3 ]
Huang, Liqin [1 ]
Zhuang, Xiahai [2 ]
Affiliations
[1] Fuzhou Univ, Coll Phys & Informat Engn, Fuzhou 350108, Peoples R China
[2] Fudan Univ, Sch Data Sci, Shanghai 200433, Peoples R China
[3] Fudan Univ, Sch Management, Dept Stat, Shanghai 200433, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Domain adaptation; Disentangle; Cardiac segmentation; Zero loss;
DOI
10.1016/j.media.2021.102078
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Unsupervised domain adaptation (UDA) generally learns a mapping to align the distributions of the source and target domains. The learned mapping can boost the performance of the model on the target data, whose labels are unavailable for model training. Previous UDA methods mainly focus on domain-invariant features (DIFs) without considering the domain-specific features (DSFs), which could be used as complementary information to constrain the model. In this work, we propose a new UDA framework for cross-modality image segmentation. The framework first disentangles each domain into the DIFs and DSFs. To enhance the representation of DIFs, self-attention modules are used in the encoder, allowing attention-driven, long-range dependency modeling for image generation tasks. Furthermore, a zero loss is minimized to enforce the information of target (source) DSFs, contained in the source (target) images, to be as close to zero as possible. These features are then iteratively decoded and encoded twice to maintain the consistency of the anatomical structure. To improve the quality of the generated images and segmentation results, several discriminators are introduced for adversarial learning. Finally, with the source data and their DIFs, we train a segmentation network, which is applicable to target images. We validated the proposed framework for cross-modality cardiac segmentation using two public datasets, and the results showed our method delivered promising performance and compared favorably to state-of-the-art approaches in terms of segmentation accuracy. The source code of this work will be released via https://zmiclab.github.io/projects.html, once this manuscript is accepted for publication. (c) 2021 Elsevier B.V. All rights reserved.
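The "zero loss" described in the abstract can be illustrated with a simple penalty on domain-specific features: when the encoder extracts the target-domain DSFs from a *source* image (or vice versa), those features should carry no information and are driven toward zero. A minimal NumPy sketch, assuming an L1-style mean-absolute penalty (the exact norm used in the paper is not specified in this record):

```python
import numpy as np

def zero_loss(dsf: np.ndarray) -> float:
    """Mean absolute value of a domain-specific feature (DSF) map.

    Minimizing this term pushes the features toward zero, enforcing
    that a source image contributes nothing to the target-domain DSFs
    (and symmetrically for target images and source-domain DSFs).
    """
    return float(np.abs(dsf).mean())

# Toy usage: a fake (batch, channels, H, W) DSF map extracted from the
# "wrong" domain; the penalty is positive until the features vanish.
rng = np.random.default_rng(0)
fake_dsf = rng.standard_normal((2, 8, 4, 4))
penalty = zero_loss(fake_dsf)               # > 0 for nonzero features
assert zero_loss(np.zeros((2, 8, 4, 4))) == 0.0  # zero once disentangled
```

In the full framework this penalty would be one term alongside the reconstruction, cycle-consistency, adversarial, and segmentation losses; the sketch only isolates the zero-loss idea.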
Pages: 11
References
52 in total
[1] [Anonymous], 2017, DOMAIN ADAPTATION VI.
[2] Badrinarayanan, Vijay; Kendall, Alex; Cipolla, Roberto. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(12):2481-2495.
[3] Benaim, Sagie; Khaitov, Michael; Galanti, Tomer; Wolf, Lior. Domain Intersection and Domain Difference. 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), 2019:3444-3452.
[4] Bengio, Y., 2012, P ICML WORKSH UNS TR, V7, P19.
[5] Cai, R.C. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019:2060.
[6] Cao, Xiaohuan; Yang, Jianhua; Gao, Yaozong; Guo, Yanrong; Wu, Guorong; Shen, Dinggang. Dual-core steered non-rigid registration for multi-modal images via bi-directional image synthesis. Medical Image Analysis, 2017, 41:18-31.
[7] Chen, C. AAAI Conference on Artificial Intelligence, 2019:3296.
[8] Chen, Chao, 2019, arXiv:1912.11976.
[9] Chen, Chen; Ouyang, Cheng; Tarroni, Giacomo; Schlemper, Jo; Qiu, Huaqi; Bai, Wenjia; Rueckert, Daniel. Unsupervised Multi-modal Style Transfer for Cardiac MR Segmentation. Statistical Atlases and Computational Models of the Heart: Multi-Sequence CMR Segmentation, CRT-EPIGGY and LV Full Quantification Challenges, 2020, 12009:209-219.
[10] Chen, C. AAAI Conference on Artificial Intelligence, 2019:865.