Visual image reconstruction based on EEG signals using a generative adversarial and deep fuzzy neural network

Cited by: 2
Authors
Ahmadieh, Hajar [1 ]
Gassemi, Farnaz [1 ]
Moradi, Mohammad Hasan [1 ]
Affiliations
[1] Amirkabir Univ Technol, Dept Biomed Engn, Tehran, Iran
Keywords
Visual Image Reconstruction; Long Short-Term Memory (LSTM); Fuzzy Regression; Generative Adversarial Networks (GAN); Electroencephalogram (EEG) Signal; Brain Activity
DOI
10.1016/j.bspc.2023.105497
Chinese Library Classification: R318 [Biomedical Engineering]
Discipline code: 0831
Abstract
This study addresses the decoding of brain activity evoked by visual images in order to classify and reconstruct the stimulus images. The dataset, provided by Stanford University, contains EEG signals recorded from ten healthy participants. In the first stage, the proposed approach combines the ability of a Long Short-Term Memory (LSTM) network to extract features from sequential data (EEG signals) with the ability of a fuzzy network to extract features under high uncertainty. The extracted features, a denoised and compressed representation of the EEG data, were then used as the conditional vector of a conditional GAN (CGAN) in the following step, so that this network could reconstruct the visual stimuli that evoked the recorded brain responses. Compared with earlier work, the proposed approach achieves higher classification accuracy, precision, recall, and F1 score. The similarity between generated and real images was also quantified with two metrics, SSIM and FID: averaged over the ten participants, SSIM was 0.37, 0.35, 0.35, 0.34, 0.42, and 0.42 for image groups (1) to (6), and FID was 0.44, 0.45, 0.51, 0.50, 0.39, and 0.39, respectively. The proposed method can therefore reconstruct visual stimuli from brain responses (EEG signals) with acceptable accuracy, taking a significant step toward the objective of mind reading.
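The abstract describes conditioning a CGAN generator on a compressed EEG feature vector produced by an LSTM/fuzzy extractor. The paper's actual architecture and dimensions are not given in this record; the following is only a minimal NumPy sketch of the data flow, with a random projection standing in for the trained feature extractor and a tiny untrained MLP standing in for the generator. All layer sizes (64-dimensional features, 100-dimensional noise, 28x28 output) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def eeg_features(eeg, dim=64):
    # Stand-in for the LSTM + fuzzy feature extractor: collapse a
    # (channels x time) EEG window into a fixed-length vector.
    # A random projection is used here purely to illustrate the shapes.
    W = rng.standard_normal((eeg.size, dim)) / np.sqrt(eeg.size)
    return np.tanh(eeg.ravel() @ W)

def cgan_generator(z, cond, img_hw=28):
    # CGAN-style conditioning: the EEG feature vector `cond` is
    # concatenated with the noise vector `z` at the generator input.
    x = np.concatenate([z, cond])
    W1 = rng.standard_normal((x.size, 128)) / np.sqrt(x.size)
    h = np.maximum(x @ W1, 0.0)                       # ReLU hidden layer
    W2 = rng.standard_normal((128, img_hw * img_hw)) / np.sqrt(128)
    img = np.tanh(h @ W2)                             # pixels in [-1, 1]
    return img.reshape(img_hw, img_hw)

eeg_window = rng.standard_normal((124, 32))  # e.g. 124 channels x 32 samples
cond = eeg_features(eeg_window)              # compressed EEG representation
img = cgan_generator(rng.standard_normal(100), cond)
print(img.shape)                             # (28, 28)
```

In a real CGAN the same conditional vector would also be fed to the discriminator, so that the adversarial loss ties the generated image to the particular brain response rather than to the image distribution alone.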
Pages: 25