An examination of EEG frequency components related to speech imagery and its identification

Cited: 0
Authors
Tsukahara A. [1 ]
Yamada M. [1 ,2 ]
Tanaka K. [1 ,2 ]
Uchikawa Y. [1 ,2 ]
Affiliations
[1] School of Science and Engineering, Tokyo Denki University, Ishizaka, Hatoyama-machi, Hiki-gun, Saitama
[2] Graduate School of Science and Engineering, Tokyo Denki University, Ishizaka, Hatoyama-machi, Hiki-gun, Saitama
Source
IEEJ Transactions on Electronics, Information and Systems | 2019, Vol. 139, No. 05
Funding
Japan Society for the Promotion of Science (JSPS)
Keywords
EEG; Mental tasks; Speech imagery;
DOI
10.1541/ieejeiss.139.588
Abstract
This study examines EEG frequency components relating the intention ("I want to drink") to the presence or absence of speech imagery when drink images are displayed on a PC screen. Each 8-s experimental trial consisted of the following: fixation image display (1 s), drink image display (2.5 s), intention check (1.5 s), fixation image display (1 s), and speech imagery period (2 s). This sequence was repeated 100 times. EEG was recorded from 10 subjects with the 10-20 system in a magnetically shielded room, using a band-pass filter (0.08-100 Hz) and a notch filter (50 Hz). For analysis, 35 trials were averaged separately for the "with" and "without" speech imagery conditions over a 3-s epoch (fixation image display plus speech imagery period), and time-frequency analysis was performed. Event-related synchronization (ERS) and desynchronization (ERD) were examined with a t-test. In the "with speech imagery" condition, ERS was observed in the α-band over the left hemisphere at latencies of 500-1500 ms, and significant differences (p < 0.05) were found in the left hemisphere and the occipital region. These results provide useful information for selecting electrode positions to detect EEG frequency components that indicate whether speech imagery accompanied the intention. In addition, an examination of electrode localization was conducted; the results suggest that the averaged RMS value of each frequency component remains informative even when the montage is reduced to 9 electrodes. © 2019 The Institute of Electrical Engineers of Japan.
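The α-band ERD/ERS measure used in the abstract can be illustrated with a generic sketch: band-pass the epoch to the alpha band, take instantaneous power from the Hilbert envelope, and express power as a percentage change relative to the fixation baseline. This is a minimal illustration on synthetic data, not the authors' pipeline; the sampling rate, alpha band edges (8-13 Hz), and the injected power increase are all assumptions for demonstration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                          # sampling rate in Hz (assumed; not stated in the abstract)
t = np.arange(0, 3.0, 1 / fs)     # 3-s epoch: 1-s fixation baseline + 2-s speech imagery

# Synthetic EEG: a 10-Hz alpha rhythm whose amplitude doubles during
# the imagery period (t >= 1 s), i.e. an artificial ERS, plus noise.
rng = np.random.default_rng(0)
gain = np.where(t >= 1.0, 2.0, 1.0)
x = gain * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Band-pass to the alpha band (8-13 Hz, assumed band edges).
b, a = butter(4, [8, 13], btype="band", fs=fs)
xf = filtfilt(b, a, x)

# Instantaneous alpha power from the Hilbert envelope.
power = np.abs(hilbert(xf)) ** 2

# ERD/ERS in percent relative to the fixation baseline (0-1 s):
# positive values indicate ERS, negative values ERD.
baseline = power[t < 1.0].mean()
erds = 100.0 * (power - baseline) / baseline
```

With the doubled alpha amplitude, `erds` averages well above zero in the imagery window, mirroring the left-hemisphere α-band ERS the study reports; in practice this curve would be averaged across the 35 trials per condition before the t-test.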
Pages: 588-595 (7 pages)