A Residual Based Attention Model for EEG Based Sleep Staging

Citations: 117
Authors
Qu, Wei [1 ]
Wang, Zhiyong [1 ]
Hong, Hong [2 ]
Chi, Zheru [3 ,4 ]
Feng, David Dagan [1 ]
Grunstein, Ron [5 ,6 ]
Gordon, Christopher [7 ,8 ,9 ]
Affiliations
[1] Univ Sydney, Sch Comp Sci, Sydney, NSW 2006, Australia
[2] Nanjing Univ Sci & Technol, Sch Elect & Opt Engn, Nanjing 210094, Peoples R China
[3] Hong Kong Polytech Univ, Dept Elect & Informat Engn, Hong Kong, Peoples R China
[4] Hong Kong Polytech Univ, Shenzhen Res Inst, Hong Kong, Peoples R China
[5] Univ Sydney, Woolcock Inst Med Res, Ctr Sleep & Chronobiol CIRUS, Sydney, NSW 2006, Australia
[6] Royal Prince Alfred Hosp, Dept Resp & Sleep Med, Sydney, NSW 2050, Australia
[7] CRC Alertness Safety & Prod, Melbourne, Vic 3000, Australia
[8] Woolcock Inst Med Res, Ctr Sleep & Chronobiol, Glebe, NSW 2037, Australia
[9] Univ Sydney, Fac Med & Hlth, Susan Wakil Sch Nursing & Midwifery, Sydney, NSW 2006, Australia
Keywords
Sleep; Electroencephalography; Feature extraction; Brain modeling; Machine learning; Training; Context modeling; Sleep staging; Deep learning; EEG signal; Hilbert transform; Attention model; Neural network; Classification
DOI
10.1109/JBHI.2020.2978004
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Sleep staging scores the sleep state of a subject into different sleep stages such as Wake and Rapid Eye Movement (REM), and it plays an indispensable role in the diagnosis and treatment of sleep disorders. Because manual sleep staging by well-trained sleep experts is time-consuming, tedious, and subjective, many automatic methods have been developed for accurate, efficient, and objective sleep staging. Recently, deep learning based methods have been proposed for electroencephalogram (EEG) based sleep staging with promising results. However, most of these methods feed raw EEG signals directly into convolutional neural networks (CNNs) without considering the domain knowledge of sleep staging. In addition, to capture temporal information, most existing methods rely on recurrent neural networks such as Long Short-Term Memory (LSTM) networks, which are not effective at modelling global temporal context and are difficult to train. Therefore, inspired by clinical sleep staging guidelines such as the American Academy of Sleep Medicine (AASM) rules, in which different stages are generally characterized by EEG waveforms of various frequencies, we propose a multi-scale deep architecture that decomposes an EEG signal into different frequency bands as input to CNNs. To model global temporal context, we utilize the multi-head self-attention module of the transformer model, which not only improves performance but also shortens training time. In addition, we adopt a residual-based architecture that enables end-to-end training. Experimental results on two widely used sleep staging datasets, the Montreal Archive of Sleep Studies (MASS) and Sleep-EDF, demonstrate the effectiveness and significant efficiency (up to 12 times less training time) of our proposed method over the state of the art.
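The abstract's main architectural ideas can be illustrated with a minimal sketch (not the authors' code): a 30 s EEG epoch is band-pass filtered into AASM-relevant frequency bands, each band-decomposed epoch is encoded by a small residual 1-D CNN, and multi-head self-attention models the global temporal context across a sequence of epochs. All band edges, layer sizes, and module names below are illustrative assumptions.

```python
# Hedged sketch of the abstract's idea: frequency-band decomposition + CNN
# epoch encoder + multi-head self-attention over an epoch sequence.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, filtfilt

FS = 100  # assumed sampling rate (Hz); Sleep-EDF EEG is commonly 100 Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_decompose(epoch: np.ndarray) -> np.ndarray:
    """Split one EEG epoch (shape [n_samples]) into per-band channels."""
    out = []
    for low, high in BANDS.values():
        b, a = butter(4, [low / (FS / 2), high / (FS / 2)], btype="band")
        out.append(filtfilt(b, a, epoch))
    return np.stack(out)  # shape [n_bands, n_samples]

class BandCNN(nn.Module):
    """Small residual 1-D CNN over a band-decomposed epoch (illustrative)."""
    def __init__(self, n_bands: int = 4, d_model: int = 64):
        super().__init__()
        self.conv1 = nn.Conv1d(n_bands, d_model, kernel_size=7, padding=3)
        self.conv2 = nn.Conv1d(d_model, d_model, kernel_size=7, padding=3)
        self.pool = nn.AdaptiveAvgPool1d(1)

    def forward(self, x):                      # x: [batch, n_bands, n_samples]
        h = torch.relu(self.conv1(x))
        h = torch.relu(self.conv2(h)) + h      # residual connection
        return self.pool(h).squeeze(-1)        # [batch, d_model]

class SleepStager(nn.Module):
    """Epoch encoder + multi-head self-attention over an epoch sequence."""
    def __init__(self, d_model: int = 64, n_heads: int = 4, n_stages: int = 5):
        super().__init__()
        self.encoder = BandCNN(d_model=d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classify = nn.Linear(d_model, n_stages)  # W, N1, N2, N3, REM

    def forward(self, x):                      # x: [batch, seq, n_bands, n_samples]
        b, s = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1)).view(b, s, -1)
        ctx, _ = self.attn(feats, feats, feats)  # global temporal context
        return self.classify(ctx + feats)        # residual + per-epoch logits

# Usage with random data standing in for 20 consecutive 30 s epochs:
if __name__ == "__main__":
    seq = np.stack([band_decompose(np.random.randn(30 * FS)) for _ in range(20)])
    logits = SleepStager()(torch.tensor(seq[None], dtype=torch.float32))
    print(logits.shape)  # torch.Size([1, 20, 5])
```

The attention module replaces the recurrent (LSTM) sequence model criticized in the abstract; attending over all epochs at once is what allows the global temporal context to be captured without step-by-step recurrence.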
Pages: 2833-2843
Number of pages: 11