A two-stage deep neural model with capsule network for personality identification

Citations: 0
Authors
Naseri, Zahra [1 ]
Momtazi, Saeedeh [1 ]
Affiliations
[1] Amirkabir Univ Technol, Tehran Polytech, Comp Engn Dept, Tehran, Iran
DOI
10.1093/llc/fqac055
Chinese Library Classification
C [Social Sciences, General];
Discipline codes
03; 0303;
Abstract
People have different ways of thinking, feeling, and hence acting, which results in different personalities. Understanding a person's personality, and how it can be identified automatically from the way they communicate with the world around them, is challenging, but it is also useful in many applications. Deep learning algorithms perform fairly well in text-based personality detection; however, many computational personality assessment models rely on limited domain knowledge. Different personality models exist for classifying personality traits according to psychologists' definitions. In this paper, we focus on the Myers-Briggs Type Indicator (MBTI) model and explain how a two-stage deep neural model for personality identification can exploit more information from text and therefore classify input data more accurately. To this end, in the first stage we use capsule neural networks to extract meaningful hidden patterns from word-level semantic representations, which are then used to calculate personality traits. In the second stage of the proposed architecture, we benefit from a contextualized document-level representation of the text as well as statistical psychological features. Our experimental results on the Myers-Briggs Personality Type dataset from Kaggle, which is labeled according to the MBTI model, show improvement in personality identification over state-of-the-art models in the field.
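The abstract does not give implementation details for the capsule stage, but capsule networks conventionally pass each capsule's output through the "squash" non-linearity of Sabour et al. (2017), which shrinks a vector's length into [0, 1) while preserving its direction so that length can be read as the probability that the capsule's pattern is present. As a minimal illustrative sketch (assuming that standard formulation, not the authors' exact code):

```python
import math

def squash(s):
    """Capsule 'squash' non-linearity:
    squash(s) = (|s|^2 / (1 + |s|^2)) * (s / |s|).
    Shrinks the vector's length into [0, 1) while keeping its direction,
    so the length can act as an existence probability for the capsule."""
    norm_sq = sum(x * x for x in s)       # |s|^2
    norm = math.sqrt(norm_sq)             # |s|
    if norm == 0.0:
        return [0.0] * len(s)             # zero vector stays zero
    scale = norm_sq / (1.0 + norm_sq) / norm
    return [scale * x for x in s]

# A long capsule vector is squashed to length just under 1:
v = squash([3.0, 4.0])                    # input length 5
print(math.sqrt(sum(x * x for x in v)))   # length 25/26, about 0.9615
```

Vectors with large norms map to lengths near 1 and short vectors to lengths near 0, which is what lets the second stage treat capsule activations as soft evidence for each personality trait.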
Pages: 667-678 (12 pages)