Big Self-Supervised Models Advance Medical Image Classification

Cited by: 253
Authors
Azizi, Shekoofeh [1]
Mustafa, Basil [1]
Ryan, Fiona [1,2,3]
Beaver, Zachary [1]
Freyberg, Jan [1]
Deaton, Jonathan [1]
Loh, Aaron [1]
Karthikesalingam, Alan [1]
Kornblith, Simon [1]
Chen, Ting [1]
Natarajan, Vivek [1]
Norouzi, Mohammad [1]
Affiliations
[1] Google Research & Health, Mountain View, CA 94043, USA
[2] Google, Mountain View, CA, USA
[3] Georgia Institute of Technology, Atlanta, GA 30332, USA
Source
2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), 2021
DOI: 10.1109/ICCV48922.2021.00346
CLC number: TP18 [Artificial Intelligence Theory]
Subject classification codes: 081104; 0812; 0835; 1405
Abstract
Self-supervised pretraining followed by supervised fine-tuning has seen success in image recognition, especially when labeled examples are scarce, but has received limited attention in medical image analysis. This paper studies the effectiveness of self-supervised learning as a pretraining strategy for medical image classification. We conduct experiments on two distinct tasks: dermatology condition classification from digital camera images and multi-label chest X-ray classification, and demonstrate that self-supervised learning on ImageNet, followed by additional self-supervised learning on unlabeled domain-specific medical images, significantly improves the accuracy of medical image classifiers. We introduce a novel Multi-Instance Contrastive Learning (MICLe) method that uses multiple images of the underlying pathology per patient case, when available, to construct more informative positive pairs for self-supervised learning. Combining our contributions, we achieve an improvement of 6.7% in top-1 accuracy on dermatology classification and an improvement of 1.1% in mean AUC on chest X-ray classification, outperforming strong supervised baselines pretrained on ImageNet. In addition, we show that big self-supervised models are robust to distribution shift and can learn efficiently with a small number of labeled medical images.
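The abstract describes two ingredients: SimCLR-style contrastive pretraining and MICLe, which builds positive pairs from multiple images of the same patient case. The sketch below illustrates that pairing idea together with a standard NT-Xent contrastive loss. It is a minimal illustration, not the authors' implementation; the helper names (sample_micle_pair, nt_xent_loss), the single-image fallback policy, and the PyTorch framing are assumptions made here for clarity.

    # Hypothetical sketch of MICLe-style positive-pair construction with an
    # NT-Xent (SimCLR-style) contrastive loss. Not the authors' code; the
    # encoder, augmentations, and sampling policy are simplified.
    import random
    import torch
    import torch.nn.functional as F

    def sample_micle_pair(case_images, augment):
        """Return two views for one patient case.

        If the case has >= 2 images, use two distinct images as the positive
        pair (MICLe); otherwise fall back to two augmentations of the same
        image (ordinary SimCLR). The fallback is an assumption for completeness.
        """
        if len(case_images) >= 2:
            x1, x2 = random.sample(case_images, 2)
        else:
            x1 = x2 = case_images[0]
        return augment(x1), augment(x2)

    def nt_xent_loss(z1, z2, temperature=0.1):
        """NT-Xent loss over a batch of positive pairs (z1[i], z2[i])."""
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        z = torch.cat([z1, z2], dim=0)        # (2N, d) stacked embeddings
        sim = z @ z.t() / temperature         # scaled cosine similarities
        n = z1.size(0)
        sim.fill_diagonal_(float('-inf'))     # exclude self-similarity
        # The positive for index i is its paired view: i+n for the first
        # half of the batch, i-n for the second half.
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

In this sketch, z1 and z2 would typically be projection-head outputs of the pretrained encoder applied to the two views of each case; the paper applies MICLe where patient cases contain multiple images (the dermatology task), and standard SimCLR pretraining otherwise.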
Pages: 3458-3468
Number of pages: 11