Self-Training with Entropy-Based Mixup for Low-Resource Chest X-ray Classification

Cited by: 0
Authors
Park, Minkyu [1 ]
Kim, Juntae [1 ]
Affiliations
[1] Dongguk Univ, Dept Comp Sci & Engn, 30,Pildong Ro 1-Gil, Seoul 04620, South Korea
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 12
Funding
National Research Foundation, Singapore;
Keywords
chest X-ray classification; data augmentation; self-training; Mixup;
DOI
10.3390/app13127198
Chinese Library Classification
O6 [Chemistry];
Discipline Code
0703 ;
Abstract
Deep learning-based medical image analysis has advanced to the point where it surpasses human radiologists on some tasks. However, labeling medical images requires human experts as well as considerable time and expense, and medical image datasets are usually imbalanced across diseases. In multilabel classification in particular, training with a small amount of labeled data causes overfitting: the model easily overfits the limited labeled data while still underfitting the large amount of unlabeled data. In this study, we propose a method that combines entropy-based Mixup with self-training to improve data-imbalanced chest X-ray classification. The proposed method applies the Mixup algorithm to the limited labeled data to alleviate the imbalance problem and performs self-training to effectively exploit the unlabeled data, iterating this process by replacing the teacher model with the student model. Experimental results in a setting with few labeled and many unlabeled examples show that combining entropy-based Mixup and self-training improves classification performance.
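The abstract's two ingredients can be illustrated with a minimal sketch. The exact pairing and selection rules of the paper's entropy-based Mixup are not given here, so the helpers below are assumptions: standard Mixup interpolation with a Beta-sampled coefficient, a label-entropy score that could be used to pick which samples to mix, and a confidence-thresholded pseudo-labeling pass for one self-training round (the names `mixup`, `label_entropy`, and `pseudo_label` are illustrative, not the authors' API).

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.4, rng=None):
    """Standard Mixup: convex combination of two inputs and their label vectors.

    lam is drawn from Beta(alpha, alpha), as in the original Mixup formulation.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

def label_entropy(y, eps=1e-12):
    """Shannon entropy of a multilabel target, after normalizing it to a
    distribution. A plausible (assumed) criterion for choosing Mixup pairs:
    higher-entropy targets carry a more even spread over disease labels."""
    p = y / max(float(y.sum()), eps)
    p = np.clip(p, eps, 1.0)
    return float(-(p * np.log(p)).sum())

def pseudo_label(teacher_probs, threshold=0.9):
    """One self-training step (assumed scheme): keep unlabeled samples whose
    maximum teacher probability clears a confidence threshold, and binarize
    the teacher's sigmoid outputs into multilabel pseudo-targets."""
    keep = teacher_probs.max(axis=1) >= threshold
    pseudo = (teacher_probs >= 0.5).astype(float)
    return keep, pseudo
```

A quick usage pattern, matching the loop described in the abstract: train a teacher on the Mixup-augmented labeled set, call `pseudo_label` on its predictions for the unlabeled pool, train a student on the union, then replace the teacher with the student and repeat.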
Pages: 11