EEG-Eye Movements Cross-Modal Decision Confidence Measurement with Generative Adversarial Networks

Cited by: 4
Authors
Fei, Cheng [1 ,2 ]
Li, Rui [1 ,2 ]
Zhao, Li-Ming [1 ,2 ]
Zheng, Wei-Long [1 ,2 ]
Lu, Bao-Liang [1 ,2 ,3 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Key Lab Shanghai Educ Commiss Intelligent Interac, Dept Comp Sci & Engn, Ctr Brain Like Comp & Machine Intelligence, 800 Dongchuan Rd, Shanghai, Peoples R China
[2] Shanghai Jiao Tong Univ, Brain Sci & Technol Res Ctr, 800 Dongchuan Rd, Shanghai, Peoples R China
[3] Shanghai Jiao Tong Univ, Sch Med, RuiJin Hosp, RuiJin Mihoyo Lab, Clin Neurosci Ctr, 197 Ruijin 2nd Rd, Shanghai 200020, Peoples R China
Source
2023 11TH INTERNATIONAL IEEE/EMBS CONFERENCE ON NEURAL ENGINEERING, NER | 2023
Funding
National Natural Science Foundation of China
DOI
10.1109/NER52421.2023.10123730
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Decision confidence is an individual's feeling of correctness or optimality when making a decision. Various physiological signals, including electroencephalography (EEG) and eye movements, have been studied extensively for measuring levels of decision confidence in humans. While multimodal fusion generally outperforms single-modal approaches, it requires data from multiple modalities at greater cost. In particular, collecting EEG data is complicated and time-consuming, whereas eye movement signals are much easier to acquire. To tackle this problem, we propose a cross-modal method based on generative adversarial learning. In our method, the intrinsic relationship between eye movement and EEG features in a high-level feature space is learned during the training phase, so that multimodal information can be obtained during the test phase when only eye movements are available as inputs. Experimental results on the SEED-VPDC dataset demonstrate that our proposed method outperforms single-modal methods trained and tested only on eye movement signals, with an improvement of approximately 5.43% in accuracy, and maintains competitive performance compared with multimodal methods. Our cross-modal approach requires only eye movements as inputs and reduces reliance on EEG data, making decision confidence measurement more applicable and practical.
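
The abstract describes the method only at a high level: an adversarial mapping from eye-movement features to EEG-like features is learned during training, and at test time the synthesized EEG features are fused with the real eye-movement features for classification. Below is a minimal PyTorch sketch of one way such a pipeline could be wired. It is an illustration under assumptions, not the authors' published architecture: all module names, layer sizes, learning rates, the number of confidence classes, and the 33-dimensional eye-movement and 310-dimensional EEG feature sizes are hypothetical choices.

import torch
import torch.nn as nn

# Assumed dimensions: 33 eye-movement features, 310 EEG features, two
# confidence levels. The actual SEED-VPDC feature sizes may differ.
EYE_DIM, EEG_DIM, HID, N_CLASSES = 33, 310, 128, 2

class Generator(nn.Module):
    """Maps eye-movement features to pseudo-EEG features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EYE_DIM, HID), nn.ReLU(),
            nn.Linear(HID, EEG_DIM),
        )

    def forward(self, eye):
        return self.net(eye)

class Discriminator(nn.Module):
    """Scores whether an EEG feature vector is real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EEG_DIM, HID), nn.ReLU(),
            nn.Linear(HID, 1),
        )

    def forward(self, feat):
        return self.net(feat)

class Classifier(nn.Module):
    """Predicts the confidence level from fused (eye, EEG-like) features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(EYE_DIM + EEG_DIM, N_CLASSES)

    def forward(self, eye, eeg_like):
        return self.net(torch.cat([eye, eeg_like], dim=1))

G, D, C = Generator(), Discriminator(), Classifier()
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()
opt_g = torch.optim.Adam(list(G.parameters()) + list(C.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

def train_step(eye, eeg, label):
    # Discriminator update: real EEG features vs. detached generated ones.
    fake = G(eye).detach()
    d_loss = bce(D(eeg), torch.ones(eeg.size(0), 1)) + \
             bce(D(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator + classifier update: fool the discriminator while keeping
    # the fused features discriminative for confidence classification.
    fake = G(eye)
    g_loss = bce(D(fake), torch.ones(fake.size(0), 1)) + ce(C(eye, fake), label)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

@torch.no_grad()
def predict(eye):
    # Test phase: only eye movements are available; EEG-like features are
    # synthesized by the generator and fused before classification.
    return C(eye, G(eye)).argmax(dim=1)

In this sketch the generator is trained both adversarially and through the classification loss, so the synthesized features stay useful for the downstream confidence decision; at test time the EEG branch is never exercised, which is what removes the dependency on EEG acquisition.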
Pages: 4