Deep Learning for Automatically Visual Evoked Potential Classification During Surgical Decompression of Sellar Region Tumors

Cited by: 16
Authors
Qiao, Nidan [1,2]
Song, Mengju [3,4]
Ye, Zhao [1 ]
He, Wenqiang [1 ]
Ma, Zengyi [1 ]
Wang, Yongfei [1 ]
Zhang, Yuyan [3 ]
Shou, Xuefei [1 ]
Affiliations
[1] Fudan Univ, Shanghai Neurosurg Res Inst, Shanghai Pituitary Tumor Ctr, Dept Neurosurg, Huashan Hosp, Shanghai Med Coll, Shanghai, Peoples R China
[2] Harvard Med Sch, Massachusetts Gen Hosp, Neuroendocrine Unit, Boston, MA 02115 USA
[3] Fudan Univ, Huashan Hosp, Shanghai Med Coll, Dept Ophthalmol, Shanghai, Peoples R China
[4] Putuo Oculopathy Dent Dis Prevent & Cure Clin, Shanghai, Peoples R China
Source
TRANSLATIONAL VISION SCIENCE & TECHNOLOGY | 2019, Vol. 8, No. 6
Keywords
artificial intelligence; optic chiasm; intraoperative monitoring; neural network;
DOI
10.1167/tvst.8.6.21
Chinese Library Classification
R77 [Ophthalmology];
Discipline Code
100212;
Abstract
Purpose: Detection of the huge amount of data generated by real-time visual evoked potential (VEP) monitoring requires labor-intensive work and experienced electrophysiologists. This study aims to build an automatic VEP classification system using a deep learning algorithm.
Methods: Patients with a sellar region tumor and optic chiasm compression were enrolled. Flash VEP monitoring was applied during surgical decompression. Sequential VEP images were fed into three neural network algorithms to train VEP classification models.
Results: We included 76 patients. During surgical decompression, we observed 68 eyes with increased VEP amplitude, 47 eyes with a transient decrease, and 37 eyes without change. We generated 2,843 sequences (39,802 images) in total (887 sequences with increasing VEP, 276 sequences with decreasing VEP, and 1,680 sequences without change). The model combining convolutional and recurrent neural networks had the highest accuracy (87.4%; 95% confidence interval, 84.2%-90.1%). The sensitivity of predicting no-change VEP, increasing VEP, and decreasing VEP was 92.6%, 78.9%, and 83.7%, respectively. The specificity of predicting no-change VEP, increasing VEP, and decreasing VEP was 80.5%, 93.3%, and 100.0%, respectively. The class activation map visualization technique showed that the P2-N3-P3 complex was important in determining the output.
Conclusions: We identified three VEP responses (no change, increase, and decrease) during transsphenoidal surgical decompression of sellar region tumors. We developed a deep learning model to classify the sequential changes of intraoperative VEP.
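The record contains no implementation details beyond the abstract, so the following is only a minimal sketch of the kind of combined convolutional-recurrent classifier described: a per-frame CNN feeding an LSTM, with a three-class output (no change, increase, decrease). Frame size, layer widths, sequence length, and all identifiers here are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical CNN+RNN classifier for sequences of VEP images (sketch only).
# Assumed inputs: grayscale frames of 224x224 pixels, 14 frames per sequence.
import torch
import torch.nn as nn

class VEPSequenceClassifier(nn.Module):
    def __init__(self, num_classes=3, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Per-frame CNN feature extractor (illustrative, not the paper's).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.proj = nn.Linear(32 * 4 * 4, feat_dim)
        # Recurrent layer aggregates frame features across the sequence.
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))     # (b*t, 32, 4, 4)
        feats = self.proj(feats.flatten(1))   # (b*t, feat_dim)
        feats = feats.view(b, t, -1)          # (b, t, feat_dim)
        _, (h_n, _) = self.rnn(feats)         # h_n: (1, b, hidden_dim)
        return self.head(h_n[-1])             # (b, num_classes) logits

# Example: classify two sequences of 14 frames each.
model = VEPSequenceClassifier()
logits = model(torch.randn(2, 14, 1, 224, 224))
pred = logits.argmax(dim=1)  # 0 = no change, 1 = increase, 2 = decrease
```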
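Similarly, the per-class sensitivity and specificity reported in the Results can be derived from a three-class confusion matrix. The counts below are placeholders for illustration, not the study's data.

```python
# Per-class sensitivity and specificity from a 3x3 confusion matrix.
# Class order assumed: no change, increase, decrease. Counts are placeholders.
import numpy as np

cm = np.array([[150,   8,  4],   # rows: true class, columns: predicted class
               [ 12,  90,  3],
               [  2,   1, 40]])

for k, name in enumerate(["no change", "increase", "decrease"]):
    tp = cm[k, k]
    fn = cm[k].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    sens = tp / (tp + fn)   # sensitivity (recall) for class k
    spec = tn / (tn + fp)   # specificity for class k
    print(f"{name}: sensitivity={sens:.1%}, specificity={spec:.1%}")
```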
Pages: 7