A Semi-Supervised Multi-Scale Arbitrary Dilated Convolution Neural Network for Pediatric Sleep Staging

Cited by: 2
|
Authors
Chen, Zhiqiang [1 ]
Pan, Xue [2 ]
Xu, Zhifei [3 ]
Li, Ke [4 ]
Lv, Yudan [2 ]
Zhang, Yuan [1 ]
Sun, Hongqiang [5 ]
Affiliations
[1] Southwest Univ, Coll Elect & Informat Engn, Chongqing 400715, Peoples R China
[2] First Hosp Jilin Univ, Dept Neurol, Changchun 130015, Peoples R China
[3] Capital Med Univ, Dept Resp Med 1, Beijing Childrens Hosp, Natl Ctr Childrens Hlth, Beijing 100045, Peoples R China
[4] Shandong Univ, Intelligent Med Engn Res Ctr, Sch Control Sci & Engn, Lab Rehabil Engn, Jinan 250061, Peoples R China
[5] Peking Univ, Peking Univ Sixth Hosp, Inst Mental Hlth, NHC Key Lab Mental Hlth,Natl Clin Res Ctr Mental D, Beijing 100191, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Pediatric sleep staging; Arbitrary dilated convolution; Single-EEG; Semi-supervised learning; EEG; CLASSIFICATION;
DOI
10.1109/JBHI.2023.3330345
CLC Classification
TP [Automation technology; computer technology];
Discipline Code
0812;
Abstract
Sleep staging is essential for assessing sleep quality and diagnosing sleep disorders. However, it is a labor-intensive process, making it arduous to obtain large quantities of high-quality labeled data for automatic sleep staging. Meanwhile, most research on automatic sleep staging pays little attention to pediatric subjects. To address these challenges, we propose a semi-supervised multi-scale arbitrary dilated convolution neural network (SMADNet) for pediatric sleep staging, which takes as input the scalogram with a high height-to-width ratio generated by the continuous wavelet transform (CWT). To extract feature representations over longer spans of the time dimension and adapt to such scalograms, SMADNet introduces a multi-scale arbitrary dilation convolution block (MADBlock) built on our proposed arbitrary dilated convolution (ADConv). Finally, we adopt semi-supervised learning as the training scheme to alleviate the reliance on labeled data. Tested on a private pediatric dataset with only 30% of the labels, our model achieves 79% accuracy, 72% kappa, and 75% MF1, demonstrating a powerful feature extraction capability and performance comparable to state-of-the-art supervised learning methods.
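The abstract describes ADConv as a dilated convolution adapted to scalograms whose two axes have very different extents, but the paper's exact formulation is not given in this record. The sketch below is a minimal numpy illustration of the underlying idea under that assumption: a 2D convolution whose dilation rate can differ per axis, plus a parallel multi-scale fusion loosely in the spirit of MADBlock. The branch rates, kernel, and simple summation fusion are illustrative assumptions, not the paper's design.

```python
import numpy as np

def dilate_kernel(kernel, d_h, d_w):
    """Expand a kernel by inserting (d - 1) zeros between taps along each
    axis (the "a trous" trick): a (kh, kw) kernel becomes
    ((kh - 1) * d_h + 1, (kw - 1) * d_w + 1)."""
    kh, kw = kernel.shape
    out = np.zeros(((kh - 1) * d_h + 1, (kw - 1) * d_w + 1), dtype=kernel.dtype)
    out[::d_h, ::d_w] = kernel
    return out

def dilated_conv2d(x, kernel, d_h=1, d_w=1):
    """'Valid' 2D cross-correlation with independent dilation rates per
    axis, so the receptive field can stretch further along one dimension
    (e.g. the long axis of a tall scalogram) than the other."""
    k = dilate_kernel(kernel, d_h, d_w)
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# Hypothetical multi-scale block: parallel branches with different
# (d_h, d_w) rates over the same scalogram, cropped to a common shape
# and summed (the fusion rule here is an assumption for illustration).
scalogram = np.random.default_rng(0).standard_normal((64, 16))  # tall CWT map
k = np.ones((3, 3)) / 9.0
branches = [dilated_conv2d(scalogram, k, d_h, d_w)
            for d_h, d_w in [(1, 1), (2, 1), (4, 1)]]
h = min(b.shape[0] for b in branches)
w = min(b.shape[1] for b in branches)
fused = sum(b[:h, :w] for b in branches)
```

With per-axis rates, a 3x3 kernel at dilation (4, 1) covers a 9x3 region while keeping only nine learnable taps, which is how asymmetric dilation extends the effective receptive field along one dimension without enlarging the kernel.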
Pages: 1043-1053 (11 pages)
Related Papers (50 total)
  • [1] A safe semi-supervised graph convolution network
    Yang, Zhi
    Yan, Yadong
    Gan, Haitao
    Zhao, Jing
    Ye, Zhiwei
    MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2022, 19 (12) : 12677 - 12692
  • [2] Multi-Scale Aggregation Graph Neural Networks Based on Feature Similarity for Semi-Supervised Learning
    Zhang, Xun
    Yang, Lanyan
    Zhang, Bin
    Liu, Ying
    Jiang, Dong
    Qin, Xiaohai
    Hao, Mengmeng
    ENTROPY, 2021, 23 (04)
  • [3] Semi-supervised multi-scale attention-aware graph convolution network for intelligent fault diagnosis of machine under extremely-limited labeled samples
    Xie, Zongliang
    Chen, Jinglong
    Feng, Yong
    He, Shuilong
    JOURNAL OF MANUFACTURING SYSTEMS, 2022, 64 : 561 - 577
  • [4] Semi-Supervised Training of Transformer and Causal Dilated Convolution Network with Applications to Speech Topic Classification
    Zeng, Jinxiang
    Zhang, Du
    Li, Zhiyi
    Li, Xiaolin
    APPLIED SCIENCES-BASEL, 2021, 11 (12)
  • [5] Multi-scale consistent self-training network for semi-supervised orbital tumor segmentation
    Wang, Keyi
    Jin, Kai
    Cheng, Zhiming
    Liu, Xindi
    Wang, Changjun
    Guan, Xiaojun
    Xu, Xiaojun
    Ye, Juan
    Wang, Wenyu
    Wang, Shuai
    MEDICAL PHYSICS, 2024, 51 (07) : 4859 - 4871
  • [6] Adversarial learning for semi-supervised pediatric sleep staging with single-EEG channel
    Li, Yamei
    Peng, Caijing
    Zhang, Yinkai
    Zhang, Yuan
    Lo, Benny
    METHODS, 2022, 204 : 84 - 91
  • [7] Semi-supervised t-SNE with multi-scale neighborhood preservation
    Serna-Serna, Walter
    de Bodt, Cyril
    Alvarez-Meza, Andres M.
    Lee, John A.
    Verleysen, Michel
    Orozco-Gutierrez, Alvaro A.
    NEUROCOMPUTING, 2023, 550
  • [8] Application of multi-scale information semi-supervised learning network in vibrating screen operational state recognition
    Wu, Yuxin
    Song, Yang
    Wang, Weidong
    Lv, Ziqi
    Zhang, Kanghui
    Zhao, Xuan
    Fan, Yuhan
    Cui, Yao
    MEASUREMENT, 2024, 238
  • [9] Multi-scale semi-supervised clustering of brain images: Deriving disease subtypes
    Wen, Junhao
    Varol, Erdem
    Sotiras, Aristeidis
    Yang, Zhijian
    Chand, Ganesh B.
    Erus, Guray
    Shou, Haochang
    Abdulkadir, Ahmed
    Hwang, Gyujoon
    Dwyer, Dominic B.
    Pigoni, Alessandro
    Dazzan, Paola
    Kahn, Rene S.
    Schnack, Hugo G.
    Zanetti, Marcus V.
    Meisenzahl, Eva
    Busatto, Geraldo F.
    Crespo-Facorro, Benedicto
    Romero-Garcia, Rafael
    Pantelis, Christos
    Wood, Stephen J.
    Zhuo, Chuanjun
    Shinohara, Russell T.
    Fan, Yong
    Gur, Ruben C.
    Gur, Raquel E.
    Satterthwaite, Theodore D.
    Koutsouleris, Nikolaos
    Wolf, Daniel H.
    Davatzikos, Christos
    MEDICAL IMAGE ANALYSIS, 2022, 75
  • [10] Multi-scale spatial consistency for deep semi-supervised skin lesion segmentation
    Nouboukpo, Adama
    Allaoui, Mohamed Lamine
    Allili, Mohand Said
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 135