Evaluating the relationship between magnetic resonance image quality metrics and deep learning-based segmentation accuracy of brain tumors

Cited by: 3
Authors
Muthusivarajan, Rajarajeswari [1]
Celaya, Adrian [1,2]
Yung, Joshua P. [1]
Long, James P. [3]
Viswanath, Satish E. [4]
Marcus, Daniel S. [5]
Chung, Caroline [6]
Fuentes, David [1]
Affiliations
[1] Univ Texas MD Anderson Canc Ctr, Dept Imaging Phys, Houston, TX 77030 USA
[2] Rice Univ, Dept Computat & Appl Math, Houston, TX USA
[3] Univ Texas MD Anderson Canc Ctr, Dept Biostat, Houston, TX 77030 USA
[4] Case Western Reserve Univ, Dept Biomed Engn, Cleveland, OH USA
[5] Washington Univ, Dept Radiol, Sch Med, St Louis, MO USA
[6] Univ Texas MD Anderson Canc Ctr, Dept Radiat Oncol, Houston, TX 77030 USA
Funding
U.S. National Science Foundation;
Keywords
AI; brain tumor; deep learning; image quality; segmentation;
DOI
10.1002/mp.17059
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Subject classification codes
1002; 100207; 1009;
Abstract
Background
Magnetic resonance imaging (MRI) scans are known to suffer from a variety of acquisition artifacts as well as equipment-based variations that impact image appearance and segmentation performance. It is still unclear whether a direct relationship exists between magnetic resonance (MR) image quality metrics (IQMs) (e.g., signal-to-noise, contrast-to-noise) and segmentation accuracy.

Purpose
Deep learning (DL) approaches have shown significant promise for automated segmentation of brain tumors on MRI but depend on the quality of the input training images. We sought to evaluate the relationship between the IQMs of input training images and DL-based brain tumor segmentation accuracy, toward developing more generalizable models for multi-institutional data.

Methods
We trained a 3D DenseNet model on the BraTS 2020 cohorts to segment the tumor subregions on MRI: enhancing tumor (ET), peritumoral edematous tissue, and necrotic and non-enhancing tumor core. Performance was quantified via a 5-fold cross-validated Dice coefficient. MRI scans were evaluated with the open-source quality control tool MRQy to yield 13 IQMs per scan. The Pearson correlation coefficient was computed between whole tumor (WT) Dice values and the IQMs in the training cohorts to identify the quality measures most correlated with segmentation performance. Each selected IQM was then used to group MRI scans as "better" quality (BQ) or "worse" quality (WQ) via relative thresholding. Segmentation performance was re-evaluated for the DenseNet model when (i) training on BQ MRI images and validating on WQ images, and (ii) training on WQ images and validating on BQ images. Trends were further validated on independent test sets derived from the BraTS 2021 training cohorts.

Results
Multimodal MRI scans from the BraTS 2020 training cohorts were used to train the segmentation model, which was validated on independent test sets derived from the BraTS 2021 cohort. Among the selected IQMs, the models trained on BQ images according to the inhomogeneity measures (coefficient of variation, coefficient of joint variation, and coefficient of variation of the foreground patch), and the models trained on WQ images according to the noise measure peak signal-to-noise ratio (PSNR), yielded significantly improved tumor segmentation accuracy compared to their inverse models.

Conclusions
Our results suggest that a significant correlation may exist between specific MR IQMs and DenseNet-based brain tumor segmentation performance. Selecting MRI scans for model training based on IQMs may yield more accurate and generalizable models on unseen validation data.
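As a rough illustration of the analysis described in the Methods (a minimal sketch, not the authors' published code), the Python example below computes the Pearson correlation between each MRQy IQM and the whole-tumor Dice score across scans, then performs a "better"/"worse" quality split by relative thresholding. The column names (wt_dice, cv, cjv, psnr), the toy values, and the use of the cohort median as the relative threshold are all assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def dice_coefficient(pred, truth):
    """Dice overlap between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def correlate_iqms_with_dice(df, iqm_columns, dice_column="wt_dice"):
    """Pearson r (and p-value) between each IQM and the WT Dice score."""
    rows = []
    for iqm in iqm_columns:
        r, p = pearsonr(df[iqm], df[dice_column])
        rows.append({"iqm": iqm, "pearson_r": r, "p_value": p})
    # Rank IQMs by the strength (absolute value) of their correlation.
    return pd.DataFrame(rows).sort_values("pearson_r", key=abs, ascending=False)

def split_by_iqm(df, iqm, higher_is_worse=True):
    """Relative thresholding: split scans at the cohort median of one IQM."""
    threshold = df[iqm].median()
    worse_mask = df[iqm] > threshold if higher_is_worse else df[iqm] <= threshold
    return df[~worse_mask], df[worse_mask]  # (BQ, WQ) subsets

# Placeholder data: one row per scan, columns = hypothetical MRQy IQMs
# plus the whole-tumor Dice score for that scan (values are fabricated).
scans = pd.DataFrame({
    "cv":      [0.21, 0.35, 0.18, 0.40],  # coefficient of variation
    "cjv":     [0.55, 0.72, 0.48, 0.80],  # coefficient of joint variation
    "psnr":    [32.1, 28.4, 33.7, 26.9],  # peak signal-to-noise ratio
    "wt_dice": [0.91, 0.84, 0.93, 0.79],
})
print(correlate_iqms_with_dice(scans, ["cv", "cjv", "psnr"]))
bq, wq = split_by_iqm(scans, "cjv")  # e.g., train on BQ, validate on WQ
```

In the study's design, the DenseNet model would then be retrained on one subset (BQ or WQ) and evaluated on the other, with the two directions compared as "inverse" models.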
Pages: 4898-4906
Page count: 9
References
29 items in total
[1] Baid U, et al. The RSNA-ASNR-MICCAI BraTS 2021 benchmark on brain tumor segmentation and radiogenomic classification. arXiv preprint, 2021.
[2] Chen L, Wu Y, D'Souza AM, Abidin AZ, Wismueller A, Xu C. MRI tumor segmentation with densely connected 3D CNN. Medical Imaging 2018: Image Processing, 2018, vol. 10574.
[3] Çiçek Ö, et al. 3D U-Net: learning dense volumetric segmentation from sparse annotation. Medical Image Computing and Computer-Assisted Intervention (MICCAI 2016), vol. 9901, p. 424. DOI: 10.1007/978-3-319-46723-8_49.
[4] Dodge S. 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), 2016.
[5] Dolz J, Gopinath K, Yuan J, Lombaert H, Desrosiers C, Ben Ayed I. HyperDense-Net: a hyper-densely connected CNN for multi-modal image segmentation. IEEE Transactions on Medical Imaging, 2019, 38(5): 1116-1126.
[6] Dukart J, Mueller K, Barthel H, Villringer A, Sabri O, Schroeter ML. Meta-analysis based SVM classification enables accurate detection of Alzheimer's disease across different clinical centers using FDG-PET and MRI. Psychiatry Research: Neuroimaging, 2013, 212(3): 230-236.
[7] Esteban O, Birman D, Schaer M, Koyejo OO, Poldrack RA, Gorgolewski KJ. MRIQC: advancing the automatic prediction of image quality in MRI from unseen sites. PLOS ONE, 2017, 12(9).
[8] Gibson E, Hu Y, Ghavami N, Ahmed HU, Moore C, Emberton M, Huisman HJ, Barratt DC. Inter-site variability in prostate segmentation accuracy using deep learning. Medical Image Computing and Computer Assisted Intervention (MICCAI 2018), Part IV, 2018, vol. 11073, pp. 506-514.
[9] Hu J, Gu X, Gu X. Dual-pathway DenseNets with fully lateral connections for multimodal brain tumor segmentation. International Journal of Imaging Systems and Technology, 2021, 31(1): 364-378.
[10] Huang G, Liu Z, van der Maaten L, Weinberger KQ. Densely connected convolutional networks. 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), 2017: 2261-2269.