Less is More: Selective reduction of CT data for self-supervised pre-training of deep learning models with contrastive learning improves downstream classification performance

Cited by: 0

Authors
Wolf, Daniel [1 ,2 ]
Payer, Tristan [1 ]
Lisson, Catharina Silvia [2 ]
Lisson, Christoph Gerhard [2 ]
Beer, Meinrad [2 ]
Götz, Michael [2 ]
Ropinski, Timo [1 ]
Affiliations
[1] Visual Computing Research Group, Institute of Media Informatics, Ulm University, James-Franck-Ring, 89081 Ulm, Germany
[2] Experimental Radiology Research Group, Department for Diagnostic and Interventional Radiology, Ulm University Medical Center, Albert-Einstein-Allee, 89081 Ulm, Germany
Funding
National Institutes of Health (US)
Keywords
Medical imaging; Self-supervised learning; Semi-supervised learning
DOI
10.1016/j.compbiomed.2024.109242
Related Papers (50 in total)
  • [21] AN ADAPTER BASED PRE-TRAINING FOR EFFICIENT AND SCALABLE SELF-SUPERVISED SPEECH REPRESENTATION LEARNING
    Kessler, Samuel
    Thomas, Bethan
    Karout, Salah
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3179 - 3183
  • [22] Self-supervised graph neural network with pre-training generative learning for recommendation systems
    Min, Xin
    Li, Wei
    Yang, Jinzhao
    Xie, Weidong
    Zhao, Dazhe
    SCIENTIFIC REPORTS, 2022, 12 (01)
  • [24] Pre-training Question Embeddings for Improving Knowledge Tracing with Self-supervised Bi-graph Co-contrastive Learning
    Wang, Wentao
    Ma, Huifang
    Zhao, Yan
    Li, Zhixin
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18 (04)
  • [25] W2V-BERT: COMBINING CONTRASTIVE LEARNING AND MASKED LANGUAGE MODELING FOR SELF-SUPERVISED SPEECH PRE-TRAINING
    Chung, Yu-An
    Zhang, Yu
    Han, Wei
    Chiu, Chung-Cheng
    Qin, James
    Pang, Ruoming
    Wu, Yonghui
    2021 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU), 2021, : 244 - 250
  • [26] Censer: Curriculum Semi-supervised Learning for Speech Recognition Based on Self-supervised Pre-training
    Zhang, Bowen
    Cao, Songjun
    Zhang, Xiaoming
    Zhang, Yike
    Ma, Long
    Shinozaki, Takahiro
    INTERSPEECH 2022, 2022, : 2653 - 2657
  • [27] ADCL: Adversarial Distilled Contrastive Learning on lightweight models for self-supervised image classification
    Wu, Ran
    Liu, Huanyu
    Li, Jun-Bao
    KNOWLEDGE-BASED SYSTEMS, 2023, 278
  • [28] Multi-view Contrastive Self-Supervised Learning of Accounting Data Representations for Downstream Audit Tasks
    Schreyer, Marco
    Sattarov, Timur
    Borth, Damian
    ICAIF 2021: THE SECOND ACM INTERNATIONAL CONFERENCE ON AI IN FINANCE, 2021,
  • [29] Enhancing prognostics for sparse labeled data using advanced contrastive self-supervised learning with downstream integration
    Deng, Weikun
    Nguyen, Khanh T. P.
    Gogu, Christian
    Medjaher, Kamal
    Morio, Jerome
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 138
  • [30] HANDLING SEVERE DATA IMBALANCE IN CHEST X-RAY IMAGE CLASSIFICATION WITH TRANSFER LEARNING USING SWAV SELF-SUPERVISED PRE-TRAINING
    Muljo, Hery Harjono
    Pardamean, Bens
    Elwirehardja, Gregorius Natanael
    Hidayat, Alam Ahmad
    Sudigyo, Digdo
    Rahutomo, Reza
    Cenggoro, Tjeng Wawan
    COMMUNICATIONS IN MATHEMATICAL BIOLOGY AND NEUROSCIENCE, 2023,