MoCo-CXR: MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models

Cited by: 0
Authors
Sowrirajan, Hari [1 ]
Yang, Jingbo [1 ]
Ng, Andrew Y. [1 ]
Rajpurkar, Pranav [1 ]
Affiliations
[1] Stanford Univ, Dept Comp Sci, Stanford, CA 94305 USA
Source
MEDICAL IMAGING WITH DEEP LEARNING, 2021, Vol. 143
Keywords
Contrastive Learning; Chest X-Rays
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Contrastive learning is a form of self-supervision that can leverage unlabeled data to produce pretrained models. While contrastive learning has demonstrated promising results on natural-image classification tasks, its application to medical imaging tasks like chest X-ray interpretation has been limited. In this work, we propose MoCo-CXR, an adaptation of the contrastive learning method Momentum Contrast (MoCo), to produce models with better representations and initializations for detecting pathologies in chest X-rays. In detecting pleural effusion, we find that linear models trained on MoCo-CXR-pretrained representations outperform those trained without them, indicating that MoCo-CXR-pretrained representations are of higher quality. End-to-end fine-tuning experiments reveal that a model initialized via MoCo-CXR pretraining outperforms its non-MoCo-CXR-pretrained counterpart. We find that MoCo-CXR pretraining provides the most benefit when labeled training data are limited. Finally, we demonstrate similar results on a target Tuberculosis dataset unseen during pretraining, indicating that MoCo-CXR pretraining endows models with representations and transferability that can be applied across chest X-ray datasets and tasks.
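The method the abstract describes builds on Momentum Contrast (MoCo): a query encoder is trained with an InfoNCE loss to match the momentum-updated key encoder's embedding of another augmented view of the same image, against a queue of negative embeddings. Below is a minimal single-GPU PyTorch sketch of that mechanism; the ResNet-18 backbone, 128-d projection, queue size, momentum, and temperature are illustrative defaults borrowed from MoCo, not the paper's exact setup, and details such as shuffled batch normalization and X-ray-specific augmentations are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class MoCoSketch(nn.Module):
    """Minimal single-GPU MoCo-style pretraining sketch (no shuffled BN)."""

    def __init__(self, dim=128, queue_size=4096, momentum=0.999, temperature=0.07):
        super().__init__()
        self.m = momentum
        self.t = temperature
        # Query and key encoders share an architecture; the key encoder is
        # never updated by gradients, only by a momentum average.
        self.encoder_q = torchvision.models.resnet18(num_classes=dim)
        self.encoder_k = torchvision.models.resnet18(num_classes=dim)
        for p_q, p_k in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            p_k.data.copy_(p_q.data)
            p_k.requires_grad = False
        # Queue of past key embeddings used as negatives (columns are keys).
        self.register_buffer("queue", F.normalize(torch.randn(dim, queue_size), dim=0))
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _momentum_update(self):
        # theta_k <- m * theta_k + (1 - m) * theta_q
        for p_q, p_k in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            p_k.data.mul_(self.m).add_(p_q.data, alpha=1.0 - self.m)

    @torch.no_grad()
    def _enqueue(self, keys):
        # Assumes queue_size is divisible by the batch size.
        n, ptr = keys.shape[0], int(self.ptr)
        self.queue[:, ptr:ptr + n] = keys.T
        self.ptr[0] = (ptr + n) % self.queue.shape[1]

    def forward(self, im_q, im_k):
        # im_q, im_k: two augmented views of the same batch of X-ray images.
        q = F.normalize(self.encoder_q(im_q), dim=1)
        with torch.no_grad():
            self._momentum_update()
            k = F.normalize(self.encoder_k(im_k), dim=1)
        # InfoNCE: one positive per query (its own key) vs. queued negatives.
        l_pos = torch.einsum("nc,nc->n", q, k).unsqueeze(-1)       # (N, 1)
        l_neg = torch.einsum("nc,ck->nk", q, self.queue.detach())  # (N, K)
        logits = torch.cat([l_pos, l_neg], dim=1) / self.t
        labels = torch.zeros(logits.shape[0], dtype=torch.long, device=logits.device)
        self._enqueue(k)
        return F.cross_entropy(logits, labels)
```

The linear-evaluation experiments then amount to freezing the pretrained encoder and training only a linear classifier on its features; a minimal sketch under the same assumptions, with a hypothetical binary pleural-effusion label:

```python
# Hypothetical linear-evaluation sketch: freeze the pretrained backbone and
# train only a linear head for a binary pleural-effusion label.
moco = MoCoSketch()  # in practice, load MoCo-CXR-pretrained weights here
backbone = moco.encoder_q
backbone.fc = nn.Identity()          # drop projection head -> 512-d features
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

head = nn.Linear(512, 1)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

def probe_step(images, labels):
    # images: (N, 3, 224, 224); labels: float tensor of shape (N, 1) in {0, 1}.
    with torch.no_grad():
        feats = backbone(images)
    loss = F.binary_cross_entropy_with_logits(head(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The end-to-end fine-tuning experiments the abstract contrasts with this probe differ only in leaving the backbone parameters trainable, typically with a smaller learning rate for the pretrained layers.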
Pages: 728-744
Page count: 17