COVID-19 Infection Segmentation and Severity Assessment Using a Self-Supervised Learning Approach

Cited: 8
Authors
Song, Yao [1,2]
Liu, Jun [1,2]
Liu, Xinghua [3]
Tang, Jinshan [4]
Affiliations
[1] Wuhan Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan 430065, Peoples R China
[2] Hubei Prov Key Lab Intelligent Informat Proc & Re, Wuhan 430065, Peoples R China
[3] Wuhan First Hosp, Wuhan 430030, Peoples R China
[4] George Mason Univ, Coll Hlth & Human Serv, Dept Hlth Adm & Policy, Fairfax, VA 22030 USA
Keywords
self-supervised learning; COVID-19; lesion segmentation; SYSTEM;
DOI
10.3390/diagnostics12081805
Chinese Library Classification (CLC)
R5 [Internal Medicine];
Subject Classification Codes
1002; 100201;
Abstract
Background: Automated segmentation of COVID-19 infection lesions and assessment of infection severity are critical for COVID-19 diagnosis and treatment. Deep learning approaches trained on large amounts of annotated data have been widely used in COVID-19 medical image analysis. However, training a deep CNN model generally requires a huge number of samples, and it is challenging to obtain enough annotated medical images.
Methods: To address these challenges, we propose a novel self-supervised deep learning method for automated segmentation of COVID-19 infection lesions and assessment of infection severity, which reduces the dependence on annotated training samples. In the proposed method, a large amount of unlabeled data is first used to pre-train an encoder-decoder model to learn rotation-dependent and rotation-invariant features. A small amount of labeled data is then used to fine-tune the pre-trained encoder-decoder for COVID-19 severity classification and lesion segmentation.
Results: The proposed method was tested on two public COVID-19 CT datasets and one self-built dataset. Accuracy, precision, recall, and F1-score were used to measure classification performance, and the Dice coefficient was used to measure segmentation performance. For COVID-19 severity classification, the proposed method outperformed other unsupervised feature learning methods by about 7.16% in accuracy. For segmentation, the Dice value of the proposed method was 5.58% higher than that of U-Net with 100% of the labeled data, 8.02% higher with 70% of the labeled data, 11.88% higher with 30%, and 16.88% higher with 10%.
Conclusions: The proposed method provides better classification and segmentation performance than other methods when labeled data are limited.
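The pre-training strategy described in the abstract (learning rotation-dependent and rotation-invariant features from unlabeled CT slices before fine-tuning on a small labeled set) can be illustrated with a minimal PyTorch sketch. Everything below is an illustrative assumption, not the authors' implementation: the toy TinyEncoder, the feature dimensions, the even split of the embedding into rotation-dependent and rotation-invariant halves, and the simple view-consistency loss are placeholders for the paper's encoder-decoder and training recipe.

    # Minimal sketch (assumed architecture and losses) of rotation-based
    # self-supervised pre-training on unlabeled CT slices.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyEncoder(nn.Module):
        """Toy CNN encoder standing in for the paper's encoder (assumption)."""
        def __init__(self, feat_dim=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(32, feat_dim)

        def forward(self, x):
            h = self.conv(x).flatten(1)
            return self.fc(h)

    def rotate_batch(x):
        """Create 4 rotated copies (0/90/180/270 degrees) and their rotation labels."""
        rots = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
        labels = torch.arange(4).repeat_interleave(x.size(0))
        return torch.cat(rots, dim=0), labels

    encoder = TinyEncoder(feat_dim=64)
    rot_head = nn.Linear(32, 4)  # classifies the applied rotation from the first feature half
    optimizer = torch.optim.Adam(
        list(encoder.parameters()) + list(rot_head.parameters()), lr=1e-3
    )

    # One pre-training step on a dummy unlabeled batch of 1-channel 64x64 slices.
    x = torch.randn(8, 1, 64, 64)
    x_rot, y_rot = rotate_batch(x)
    feats = encoder(x_rot)
    f_dep, f_inv = feats[:, :32], feats[:, 32:]  # rotation-dependent / rotation-invariant halves

    # Rotation-dependent half: predict which rotation was applied.
    loss_rot = F.cross_entropy(rot_head(f_dep), y_rot)

    # Rotation-invariant half: pull the 4 rotated views of each image together
    # (mean-squared distance to their per-image mean embedding).
    f_inv_views = f_inv.view(4, x.size(0), -1)
    loss_inv = ((f_inv_views - f_inv_views.mean(dim=0, keepdim=True)) ** 2).mean()

    loss = loss_rot + loss_inv
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

After pre-training, the rotation head would be discarded and the encoder (together with a decoder) fine-tuned on the small labeled subset for severity classification and lesion segmentation, for example with a cross-entropy or Dice-based loss.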
Pages: 17
Related Papers
55 references in total
[11]   Self-supervised Object Motion and Depth Estimation from Video [J].
Dai, Qi ;
Patil, Vaishakh ;
Hecker, Simon ;
Dai, Dengxin ;
Van Gool, Luc ;
Schindler, Konrad .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, :4326-4334
[12]   Unsupervised Visual Representation Learning by Context Prediction [J].
Doersch, Carl ;
Gupta, Abhinav ;
Efros, Alexei A. .
2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, :1422-1430
[13]   Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks [J].
Dosovitskiy, Alexey ;
Fischer, Philipp ;
Springenberg, Jost Tobias ;
Riedmiller, Martin ;
Brox, Thomas .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2016, 38 (09) :1734-1747
[14]   Self-Supervised Representation Learning by Rotation Feature Decoupling [J].
Feng, Zeyu ;
Xu, Chang ;
Tao, Dacheng .
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, :10356-10366
[15]  
Gidaris S., 2018, ARXIV
[16]   CT-Based COVID-19 triage: Deep multitask learning improves joint identification and severity quantification [J].
Goncharov, Mikhail ;
Pisov, Maxim ;
Shevtsov, Alexey ;
Shirokikh, Boris ;
Kurmukov, Anvar ;
Blokhin, Ivan ;
Chernina, Valeria ;
Solovev, Alexander ;
Gombolevskiy, Victor ;
Morozov, Sergey ;
Belyaev, Mikhail .
MEDICAL IMAGE ANALYSIS, 2021, 71
[17]  
Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672
[18]   Rethinking ImageNet Pre-training [J].
He, Kaiming ;
Girshick, Ross ;
Dollar, Piotr .
2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, :4917-4926
[19]   Momentum Contrast for Unsupervised Visual Representation Learning [J].
He, Kaiming ;
Fan, Haoqi ;
Wu, Yuxin ;
Xie, Saining ;
Girshick, Ross .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2020), 2020, :9726-9735
[20]   Deep Residual Learning for Image Recognition [J].
He, Kaiming ;
Zhang, Xiangyu ;
Ren, Shaoqing ;
Sun, Jian .
2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, :770-778