Automated Koos Classification of Vestibular Schwannoma

Cited by: 6
Authors
Kujawa, Aaron [1]
Dorent, Reuben [1]
Connor, Steve [1,2,3]
Oviedova, Anna [4]
Okasha, Mohamed [4]
Grishchuk, Diana [5]
Ourselin, Sebastien [6]
Paddick, Ian [5]
Kitchen, Neil [5,7]
Vercauteren, Tom [1]
Shapey, Jonathan [1,4]
Affiliations
[1] Kings Coll London, Sch Biomed Engn & Imaging Sci, London, England
[2] Kings Coll Hosp London, Dept Neuroradiol, London, England
[3] Guys Hosp, Dept Radiol, London, England
[4] Kings Coll Hosp London, Dept Neurosurg, London, England
[5] Natl Hosp Neurol & Neurosurg, Queen Sq Radiosurg Ctr Gamma Knife, London, England
[6] UCL, Wellcome/EPSRC Ctr Intervent & Surg Sci, London, England
[7] Natl Hosp Neurol & Neurosurg, Dept Neurosurg, London, England
Source
FRONTIERS IN RADIOLOGY | 2022, Vol. 2
Funding
Wellcome Trust (UK);
Keywords
vestibular schwannoma; classification; segmentation; deep learning; artificial intelligence; SEGMENTATION; SURVEILLANCE; MANAGEMENT;
DOI
10.3389/fradi.2022.837191
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Subject Classification Codes
1002; 100207; 1009;
Abstract
Objective: The Koos grading scale is a frequently used classification system for vestibular schwannoma (VS) that accounts for extrameatal tumor dimension and compression of the brain stem. We propose an artificial intelligence (AI) pipeline to fully automate the segmentation and Koos classification of VS from MRI, to improve clinical workflow and facilitate patient management.
Methods: We propose a method for Koos classification that relies not only on the available images but also on automatically generated segmentations. Artificial neural networks were trained and tested using manual tumor segmentations and ground-truth Koos grades of contrast-enhanced T1-weighted (ceT1) and high-resolution T2-weighted (hrT2) MR images from subjects with a single sporadic VS, acquired on a single scanner with a standardized protocol. The first stage of the pipeline is a convolutional neural network (CNN) that segments the VS and 7 adjacent structures. For the second stage, we propose two complementary approaches that are combined in an ensemble: the first applies a second CNN to the segmentation output to predict the Koos grade, while the second extracts handcrafted features that are passed to a Random Forest classifier. The pipeline results were compared to those achieved by two neurosurgeons.
Results: Eligible patients (n = 308) were pseudo-randomly split into 5 groups to evaluate model performance with 5-fold cross-validation. The weighted macro-averaged mean absolute error (MA-MAE), weighted macro-averaged F1 score (F1), and accuracy score of the ensemble model on the testing sets were as follows: MA-MAE = 0.11 ± 0.05, F1 = 89.3 ± 3.0%, accuracy = 89.3 ± 2.9%, which was comparable to the average performance of the two neurosurgeons: MA-MAE = 0.11 ± 0.08, F1 = 89.1 ± 5.2%, accuracy = 88.6 ± 5.8%. Inter-rater reliability was assessed by calculating Fleiss' generalized kappa (κ = 0.68) based on all 308 cases, and intra-rater reliabilities of annotator 1 (κ = 0.95) and annotator 2 (κ = 0.82) were calculated according to the weighted kappa metric with quadratic (Fleiss-Cohen) weights based on 15 randomly selected cases.
Conclusions: We developed the first AI framework to automatically classify VS according to the Koos scale. The results show that the accuracy of the framework is comparable to that of neurosurgeons and may therefore facilitate the management of patients with VS. The models, code, and ground-truth Koos grades for a subset of publicly available images (n = 188) will be released upon publication.
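As an illustration of the evaluation metrics reported above, the short Python sketch below computes a macro-averaged mean absolute error, a weighted F1 score, and a quadratically weighted Cohen's kappa for ordinal Koos grades using NumPy and scikit-learn. This is a minimal sketch under the assumption that grades are encoded as integers 1-4; the function and variable names are hypothetical and this is not the authors' released code.

# Minimal sketch of the reported evaluation metrics; hypothetical names,
# not the authors' released implementation. Koos grades assumed to be integers 1-4.
import numpy as np
from sklearn.metrics import f1_score, cohen_kappa_score

def macro_averaged_mae(y_true, y_pred):
    """Mean absolute error computed per true Koos grade and then averaged
    across grades, so that rare grades contribute as much as common ones."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    per_grade_mae = [
        np.abs(y_pred[y_true == g] - y_true[y_true == g]).mean()
        for g in np.unique(y_true)
    ]
    return float(np.mean(per_grade_mae))

# Made-up example grades for a single cross-validation fold.
y_true = [1, 2, 2, 3, 3, 4, 4, 1]
y_pred = [1, 2, 3, 3, 4, 4, 3, 1]

print("MA-MAE        :", macro_averaged_mae(y_true, y_pred))
print("weighted F1   :", f1_score(y_true, y_pred, average="weighted"))
# Quadratic (Fleiss-Cohen) weights, as used for the intra-rater reliability.
print("weighted kappa:", cohen_kappa_score(y_true, y_pred, weights="quadratic"))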
Pages: 14