Multi-Organ Segmentation From Partially Labeled and Unaligned Multi-Modal MRI in Thyroid-Associated Orbitopathy

Cited by: 0
Authors
Chen, Cheng [1 ]
Deng, Min [2 ,3 ]
Zhong, Yuan [1 ]
Cai, Jinyue [1 ]
Chan, Karen Kar Wun [4 ]
Dou, Qi [1 ,5 ]
Chong, Kelvin Kam Lung [4 ]
Heng, Pheng-Ann [5 ]
Chu, Winnie Chiu-Wing [2 ,3 ]
Affiliations
[1] Chinese Univ Hong Kong, Dept Comp Sci & Engn, Hong Kong, Peoples R China
[2] Chinese Univ Hong Kong, Dept Imaging & Intervent Radiol, Hong Kong, Peoples R China
[3] Chinese Univ Hong Kong, CU Lab AI Radiol, Hong Kong, Peoples R China
[4] Chinese Univ Hong Kong, Dept Ophthalmol & Visual Sci, Hong Kong, Peoples R China
[5] Chinese Univ Hong Kong, Inst Med Intelligence, Hong Kong, Peoples R China
Keywords
Magnetic resonance imaging; Image segmentation; Orbits; Training; Computed tomography; Data models; Visualization; Labeling; Aggregates; Muscles; Multi-modal segmentation; partial labels; thyroid-associated orbitopathy
DOI
10.1109/JBHI.2025.3545138
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Thyroid-associated orbitopathy (TAO) is a prevalent inflammatory autoimmune disorder that leads to orbital disfigurement and visual disability. Automatic comprehensive segmentation tailored for quantitative multi-modal MRI assessment of TAO holds enormous promise but is still lacking. In this paper, we propose a novel method, named cross-modal attentive self-training (CMAST), for multi-organ segmentation in TAO using partially labeled and unaligned multi-modal MRI data. Our method first introduces a dedicated cross-modal pseudo-label self-training scheme, which leverages self-training to refine the initial pseudo labels generated by cross-modal registration, thereby completing the label sets for comprehensive segmentation. With the obtained pseudo labels, we further devise a learnable attentive fusion module that aggregates multi-modal knowledge based on learned cross-modal feature attention, relaxing the requirement of pixel-wise alignment across modalities. A prototypical contrastive learning loss is further incorporated to facilitate cross-modal feature alignment. We evaluate our method on a large clinical TAO cohort of 100 cases with multi-modal orbital MRI. The experimental results demonstrate the promising performance of our method in achieving comprehensive segmentation of TAO-affected organs on both T1 and T1c modalities, outperforming previous methods by a large margin. Our code is available at: https://github.com/cchen-cc/CMAST.
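The prototypical contrastive loss mentioned in the abstract could work roughly as follows: class prototypes (mean features per organ class) are computed from one modality, and pixel features from the other modality are pulled toward their own-class prototype and pushed away from the rest. The sketch below is an illustrative assumption about this mechanism in PyTorch, not the authors' actual implementation; the function name, tensor shapes, and temperature value are hypothetical, and the real code is in the linked repository.

```python
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(feats_a, feats_b, labels_a, labels_b,
                                  num_classes, temperature=0.1):
    """InfoNCE-style alignment of modality-B pixel features to
    class prototypes computed from modality A.

    feats_a, feats_b : (N, D) pixel feature vectors from the two modalities
    labels_a, labels_b : (N,) integer class index per pixel
    """
    # Class prototypes from modality A: mean feature of each class,
    # L2-normalized so the dot product acts as cosine similarity.
    protos = torch.stack([feats_a[labels_a == c].mean(dim=0)
                          for c in range(num_classes)])       # (C, D)
    protos = F.normalize(protos, dim=1)

    feats_b = F.normalize(feats_b, dim=1)                     # (N, D)

    # Similarity of each modality-B feature to every prototype; the
    # same-class prototype is the positive, all others are negatives.
    logits = feats_b @ protos.t() / temperature               # (N, C)
    return F.cross_entropy(logits, labels_b)
```

Because gradients flow through both the prototypes and the pixel features, minimizing this loss draws the two modalities toward a shared per-class feature space without requiring pixel-wise spatial alignment.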
Pages: 4161-4172
Page count: 12