A Semi-Supervised Multi-Scale Arbitrary Dilated Convolution Neural Network for Pediatric Sleep Staging

Cited by: 2
Authors
Chen, Zhiqiang [1 ]
Pan, Xue [2 ]
Xu, Zhifei [3 ]
Li, Ke [4 ]
Lv, Yudan [2 ]
Zhang, Yuan [1 ]
Sun, Hongqiang [5 ]
Affiliations
[1] Southwest Univ, Coll Elect & Informat Engn, Chongqing 400715, Peoples R China
[2] First Hosp Jilin Univ, Dept Neurol, Changchun 130015, Peoples R China
[3] Capital Med Univ, Dept Resp Med 1, Beijing Childrens Hosp, Natl Ctr Childrens Hlth, Beijing 100045, Peoples R China
[4] Shandong Univ, Intelligent Med Engn Res Ctr, Sch Control Sci & Engn, Lab Rehabil Engn, Jinan 250061, Peoples R China
[5] Peking Univ, Peking Univ Sixth Hosp, Inst Mental Hlth, NHC Key Lab Mental Hlth,Natl Clin Res Ctr Mental D, Beijing 100191, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Pediatric sleep staging; Arbitrary dilation convolution; Single-EEG; Semi-supervised learning; EEG; CLASSIFICATION;
DOI
10.1109/JBHI.2023.3330345
Chinese Library Classification
TP [automation technology, computer technology];
Discipline Classification Code
0812;
Abstract
Sleep staging is essential for assessing sleep quality and diagnosing sleep disorders. However, it is a labor-intensive process, making it arduous to obtain large quantities of high-quality labeled data for automatic sleep staging. Meanwhile, most research on automatic sleep staging pays little attention to pediatric subjects. To address these challenges, we propose a semi-supervised multi-scale arbitrary dilated convolution neural network (SMADNet) for pediatric sleep staging, which takes as input the scalogram with a high height-to-width ratio generated by the continuous wavelet transform (CWT). To extract feature representations over longer time spans and to adapt to such scalograms, SMADNet introduces a multi-scale arbitrary dilation convolution block (MADBlock) built on our proposed arbitrary dilated convolution (ADConv). Finally, we adopt semi-supervised learning as the training scheme to alleviate the reliance on labeled data. Tested on a private pediatric dataset, our model achieves 79% accuracy, 72% kappa, and 75% MF1, demonstrating a powerful feature extraction capability and performance comparable to state-of-the-art supervised learning methods while using only 30% of the labels.
Pages: 1043 - 1053 (11 pages)
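The abstract describes ADConv and MADBlock only at a high level. As a minimal illustrative sketch (not the authors' implementation; the function names and the zero-stuffing realization are assumptions), a standard 1-D dilated convolution can be expressed as an ordinary convolution with a zero-stuffed kernel, and a multi-scale block simply stacks branches with different dilation rates:

```python
import numpy as np

def dilate_kernel(kernel, dilation):
    """Insert (dilation - 1) zeros between kernel taps.

    A dilated convolution with rate d equals an ordinary convolution
    with this stuffed kernel, growing the receptive field from k taps
    to (k - 1) * d + 1 without adding parameters.
    """
    k = len(kernel)
    out = np.zeros((k - 1) * dilation + 1, dtype=kernel.dtype)
    out[::dilation] = kernel
    return out

def dilated_conv1d(signal, kernel, dilation):
    """'Same'-padded 1-D dilated convolution via zero-stuffing."""
    dk = dilate_kernel(kernel, dilation)
    pad = (len(dk) - 1) // 2
    return np.convolve(np.pad(signal, pad), dk, mode="valid")

def multi_scale_block(signal, kernel, dilations=(1, 2, 4)):
    """Stack branches with different dilation rates (multi-scale)."""
    return np.stack([dilated_conv1d(signal, kernel, d) for d in dilations])

x = np.arange(10, dtype=float)
feats = multi_scale_block(x, np.ones(3))
print(feats.shape)  # (3, 10): one feature row per dilation rate
```

Larger dilation rates see farther along the time axis at the same parameter cost, which is the usual motivation for multi-scale dilated designs on long inputs such as 30-s EEG epochs.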