Interpretive Self-Supervised Pre-training: Boosting Performance on Visual Medical Data

Cited by: 1
Authors
Manna, Siladittya [1 ]
Bhattacharya, Saumik [2 ]
Pal, Umapada [1 ]
Affiliations
[1] Indian Stat Inst, Kolkata, W Bengal, India
[2] Indian Inst Technol Kharagpur, Kharagpur, W Bengal, India
Source
PROCEEDINGS OF THE TWELFTH INDIAN CONFERENCE ON COMPUTER VISION, GRAPHICS AND IMAGE PROCESSING, ICVGIP 2021, 2021
Keywords
Self-Supervised Learning; Contrastive Learning; Cosine Similarity; Loss Function; Lower Bound; Pre-training; Medical Data; Context
DOI
10.1145/3490035.3490273
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Self-supervised learning algorithms have become among the best tools for unsupervised representation learning. Although self-supervised algorithms have achieved state-of-the-art classification performance on natural image data, their application to medical data has been limited. In this work, we propose a novel loss function and derive its asymptotic lower bound. We show mathematically that the contrastive loss asymptotically treats each sample as a separate class and works by maximizing the distance between any two samples, which helps the model learn better representations. Finally, through exhaustive experiments, we demonstrate that self-supervised pre-training with the proposed loss function helps models surpass fully supervised baselines on downstream tasks.
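The abstract's claim that the contrastive loss "asymptotically treats each sample as a separate class" can be made concrete with a standard formulation. The paper's own loss function is not reproduced in this record, so the following is only a minimal sketch of a generic NT-Xent-style contrastive loss built on cosine similarity (the ingredients named in the keywords), not the authors' proposed loss: each sample's two augmented views form the positive pair, and every other sample in the batch serves as a negative, i.e. each sample is implicitly its own class.

```python
import numpy as np

def contrastive_loss(z1, z2, temperature=0.5):
    """NT-Xent-style contrastive loss over two augmented views.

    z1, z2: (N, d) embeddings of the same N samples under two augmentations.
    Each positive pair (z1[i], z2[i]) is contrasted against all other batch
    entries, so every sample is treated as its own class.
    """
    # L2-normalise so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)            # (2N, d)
    sim = z @ z.T / temperature                     # (2N, 2N) scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                  # exclude each row's self-pair
    n = z1.shape[0]
    # row i in the first half is positive with row i + n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logits = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # cross-entropy of picking the positive partner among all 2N - 1 candidates
    return -log_prob[np.arange(2 * n), pos].mean()
```

Maximizing agreement between positive pairs while the softmax denominator pushes all other pairs apart is what drives the "maximize the distance between any two samples" behaviour described in the abstract.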
Pages: 9