Automated Koos Classification of Vestibular Schwannoma

Cited by: 6
Authors
Kujawa, Aaron [1]
Dorent, Reuben [1]
Connor, Steve [1,2,3]
Oviedova, Anna [4]
Okasha, Mohamed [4]
Grishchuk, Diana [5]
Ourselin, Sebastien [6]
Paddick, Ian [5]
Kitchen, Neil [5,7]
Vercauteren, Tom [1]
Shapey, Jonathan [1,4]
Affiliations
[1] Kings Coll London, Sch Biomed Engn & Imaging Sci, London, England
[2] Kings Coll Hosp London, Dept Neuroradiol, London, England
[3] Guys Hosp, Dept Radiol, London, England
[4] Kings Coll Hosp London, Dept Neurosurg, London, England
[5] Natl Hosp Neurol & Neurosurg, Queen Sq Radiosurg Ctr Gamma Knife, London, England
[6] UCL, Ctr Intervent & Surg Sci, Wellcome Engn & Phys Sci Res Council EPSRC, London, England
[7] Natl Hosp Neurol & Neurosurg, Dept Neurosurg, London, England
Source
FRONTIERS IN RADIOLOGY | 2022, Vol. 2
Funding
Wellcome Trust (UK)
Keywords
vestibular schwannoma; classification; segmentation; deep learning; artificial intelligence; SEGMENTATION; SURVEILLANCE; MANAGEMENT;
DOI
10.3389/fradi.2022.837191
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline codes
1002; 100207; 1009
Abstract
Objective: The Koos grading scale is a frequently used classification system for vestibular schwannoma (VS) that accounts for extrameatal tumor dimension and compression of the brain stem. We propose an artificial intelligence (AI) pipeline to fully automate the segmentation and Koos classification of VS from MRI, to improve clinical workflow and facilitate patient management.

Methods: We propose a method for Koos classification that relies not only on the available images but also on automatically generated segmentations. Artificial neural networks were trained and tested on manual tumor segmentations and ground-truth Koos grades of contrast-enhanced T1-weighted (ceT1) and high-resolution T2-weighted (hrT2) MR images from subjects with a single sporadic VS, acquired on a single scanner with a standardized protocol. The first stage of the pipeline is a convolutional neural network (CNN) that segments the VS and 7 adjacent structures. For the second stage, we propose two complementary approaches that are combined in an ensemble: the first applies a second CNN to the segmentation output to predict the Koos grade; the second extracts handcrafted features that are passed to a Random Forest classifier. The pipeline results were compared with those achieved by two neurosurgeons.

Results: Eligible patients (n = 308) were pseudo-randomly split into 5 groups to evaluate model performance with 5-fold cross-validation. The weighted macro-averaged mean absolute error (MA-MAE), weighted macro-averaged F1 score (F1), and accuracy of the ensemble model on the testing sets were: MA-MAE = 0.11 +/- 0.05, F1 = 89.3 +/- 3.0%, accuracy = 89.3 +/- 2.9%, comparable to the average performance of the two neurosurgeons: MA-MAE = 0.11 +/- 0.08, F1 = 89.1 +/- 5.2%, accuracy = 88.6 +/- 5.8%.
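The reported metrics can be illustrated with standard tooling. This is a minimal sketch on toy labels, not the study data; interpreting "weighted macro-averaged" as per-class scores weighted by class support is our assumption.

```python
# Sketch of the evaluation metrics on illustrative Koos grades (1-4).
# Toy labels only; weighting per-class MAE by class support is an assumption
# about the "weighted macro-averaged" definition.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

y_true = np.array([1, 2, 2, 3, 4, 4, 3, 1])  # ground-truth Koos grades
y_pred = np.array([1, 2, 3, 3, 4, 3, 3, 1])  # model (or rater) predictions

classes, support = np.unique(y_true, return_counts=True)
# Mean absolute grade error computed per class, then support-weighted.
per_class_mae = np.array(
    [np.mean(np.abs(y_pred[y_true == c] - c)) for c in classes]
)
ma_mae = float(np.average(per_class_mae, weights=support))

f1 = f1_score(y_true, y_pred, average="weighted")  # support-weighted F1
acc = accuracy_score(y_true, y_pred)
```

Because MA-MAE is computed on ordinal grades, it penalizes a grade-1-vs-grade-4 error more than a grade-2-vs-grade-3 error, unlike plain accuracy.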
Inter-rater reliability was assessed by calculating Fleiss' generalized kappa (k = 0.68) on all 308 cases; intra-rater reliabilities of annotator 1 (k = 0.95) and annotator 2 (k = 0.82) were calculated with the weighted kappa metric using quadratic (Fleiss-Cohen) weights on 15 randomly selected cases.

Conclusions: We developed the first AI framework to automatically classify VS according to the Koos scale. The excellent results show that the accuracy of the framework is comparable to that of neurosurgeons and may therefore facilitate the management of patients with VS. The models, code, and ground-truth Koos grades for a subset of publicly available images (n = 188) will be released upon publication.
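The quadratically weighted (Fleiss-Cohen) kappa used for the intra-rater comparison is available off the shelf in scikit-learn; a sketch on hypothetical repeat gradings, not the study's annotations:

```python
# Quadratically weighted (Fleiss-Cohen) kappa between two grading sessions
# of the same rater (toy Koos grades, not the study data).
from sklearn.metrics import cohen_kappa_score

session_1 = [1, 2, 2, 3, 4, 4, 3, 1, 2, 3]  # first grading pass
session_2 = [1, 2, 3, 3, 4, 4, 3, 1, 2, 3]  # repeat grading pass

kappa_q = cohen_kappa_score(session_1, session_2, weights="quadratic")
```

Quadratic weights penalize disagreements by the squared grade distance, which suits an ordinal scale such as Koos; Fleiss' generalized kappa for more than two raters requires per-category count tables instead of paired label lists.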
Pages: 14