Less is More: Selective reduction of CT data for self-supervised pre-training of deep learning models with contrastive learning improves downstream classification performance

Cited: 0
Authors
Wolf, Daniel [1 ,2 ]
Payer, Tristan [1 ]
Lisson, Catharina Silvia [2 ]
Lisson, Christoph Gerhard [2 ]
Beer, Meinrad [2 ]
Götz, Michael [2 ]
Ropinski, Timo [1 ]
Affiliations
[1] Visual Computing Research Group, Institute of Media Informatics, Ulm University, James-Franck-Ring, Ulm, 89081, Germany
[2] Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert-Einstein-Allee, Ulm, 89081, Germany
Funding
US National Institutes of Health
Keywords
Medical imaging; Self-supervised learning; Semi-supervised learning
DOI
10.1016/j.compbiomed.2024.109242
Related papers
50 results
  • [1] Dense Contrastive Learning for Self-Supervised Visual Pre-Training
    Wang, Xinlong
    Zhang, Rufeng
    Shen, Chunhua
    Kong, Tao
    Li, Lei
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 3023 - 3032
  • [2] Contrastive Learning for Self-Supervised Pre-Training of Point Cloud Segmentation Networks With Image Data
    Janda, Andrej
    Wagstaff, Brandon
    Ng, Edwin G.
    Kelly, Jonathan
    2023 20TH CONFERENCE ON ROBOTS AND VISION, CRV, 2023, : 145 - 152
  • [3] LPCL: Localized prominence contrastive learning for self-supervised dense visual pre-training
    Chen, Zihan
    Zhu, Hongyuan
    Cheng, Hao
    Mi, Siya
    Zhang, Yu
    Geng, Xin
    PATTERN RECOGNITION, 2023, 135
  • [4] Class incremental learning with self-supervised pre-training and prototype learning
    Liu, Wenzhuo
    Wu, Xin-Jian
    Zhu, Fei
    Yu, Ming-Ming
    Wang, Chuang
    Liu, Cheng-Lin
    PATTERN RECOGNITION, 2025, 157
  • [5] Self-supervised Pre-training and Contrastive Representation Learning for Multiple-choice Video QA
    Kim, Seonhoon
    Jeong, Seohyeong
    Kim, Eunbyul
    Kang, Inho
    Kwak, Nojun
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 13171 - 13179
  • [6] Multimodal Visual-Tactile Representation Learning through Self-Supervised Contrastive Pre-Training
    Dave, Vedant
    Lygerakis, Fotios
    Rueckert, Elmar
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2024, 2024, : 8013 - 8020
  • [7] Self-supervised pre-training with contrastive and masked autoencoder methods for dealing with small datasets in deep learning for medical imaging
    Wolf, Daniel
    Payer, Tristan
    Lisson, Catharina Silvia
    Lisson, Christoph Gerhard
    Beer, Meinrad
    Götz, Michael
    Ropinski, Timo
    SCIENTIFIC REPORTS, 2023, 13 (01)
  • [8] Deep learning based on self-supervised pre-training: Application on sandstone content prediction
    Wang, Chong Ming
    Wang, Xing Jian
    Chen, Yang
    Wen, Xue Mei
    Zhang, Yong Heng
    Li, Qing Wu
    FRONTIERS IN EARTH SCIENCE, 2023, 10
  • [9] Self-supervised pre-training improves fundus image classification for diabetic retinopathy
    Lee, Joohyung
    Lee, Eung-Joo
    REAL-TIME IMAGE PROCESSING AND DEEP LEARNING 2022, 2022, 12102