Multiple representation contrastive self-supervised learning for pulmonary nodule detection

Cited by: 0
Authors
Torki, Asghar [1 ]
Adibi, Peyman [1 ]
Kashani, Hamidreza Baradaran [1 ]
Affiliations
[1] Univ Isfahan, Fac Comp Engn, Artificial Intelligence Dept, Esfahan, Iran
Keywords
Contrastive learning; Self-supervised learning; Transformation invariant subspaces; Representation learning; Pulmonary nodule detection
DOI
10.1016/j.knosys.2024.112307
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Self-supervised learning aims to create semantically enriched representations from unannotated data. A prevalent strategy in this field is to train a unified representation space that is invariant to various combinations of transformations. However, building a single representation that is invariant to multiple transformations poses several challenges: the efficacy of such a representation space depends on the intensity, order, and combination of the transformations, so features produced in a single representation space may exhibit limited adaptability to subsequent tasks. In contrast to this conventional self-supervised learning (SSL) training approach, we introduce a novel method that constructs multiple atomic transformation-invariant representation subspaces, each invariant to a specific atomic transformation from a predefined reference set. Our method offers increased flexibility by enabling the downstream task to weight each atomic transformation-invariant subspace according to the desired feature space. A series of experiments was conducted to compare our approach with traditional self-supervised learning methods across diverse data regimes, datasets, evaluation protocols, and source-destination data distributions. Our results highlight the superiority of our method over training strategies based on a single transformation-invariant representation space. In addition, the proposed method outperformed several recent supervised and self-supervised approaches at reducing false positives in pulmonary nodule detection.
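The abstract describes a shared encoder with one representation subspace per atomic transformation, each trained to be invariant only to that transformation, with the downstream task weighting the subspaces. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: it assumes PyTorch, a SimCLR-style NT-Xent contrastive loss, and one projection head per atomic transformation; all names (AtomicSubspaceSSL, nt_xent, atomic_transforms) are illustrative and do not come from the paper.

```python
# Illustrative sketch of per-transformation invariant subspaces (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss between two batches of projections."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                 # (2n, d)
    sim = z @ z.t() / temperature                  # pairwise similarities
    sim.fill_diagonal_(float('-inf'))              # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

class AtomicSubspaceSSL(nn.Module):
    """Shared backbone with one projection head (subspace) per atomic transformation."""
    def __init__(self, backbone, feat_dim, proj_dim, num_transforms):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                          nn.Linear(feat_dim, proj_dim))
            for _ in range(num_transforms)
        )

    def forward(self, x, head_idx):
        return self.heads[head_idx](self.backbone(x))

def training_step(model, batch, atomic_transforms):
    """Each subspace only sees two views that differ by its own atomic transformation."""
    loss = 0.0
    for k, t in enumerate(atomic_transforms):      # e.g. rotation, crop, blur
        v1, v2 = t(batch), t(batch)                # two stochastic applications
        loss = loss + nt_xent(model(v1, k), model(v2, k))
    return loss / len(atomic_transforms)

# Downstream (e.g. false-positive reduction in nodule detection), the per-subspace
# features could be combined with learnable weights w[k]:
#   feats = torch.cat([w[k] * model(x, k) for k in range(K)], dim=1)
```

The per-subspace weighting shown in the final comment mirrors the abstract's claim that a downstream task can emphasise whichever invariances suit it; the exact weighting scheme used by the authors is not specified in the abstract.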
Pages: 13