Multi-source domain adaptation method for textual emotion classification using deep and broad learning

Times Cited: 22
Authors
Peng, Sancheng [1 ]
Zeng, Rong [2 ]
Cao, Lihong [1 ]
Yang, Aimin [3 ]
Niu, Jianwei [4 ]
Zong, Chengqing [5 ]
Zhou, Guodong [6 ]
Affiliations
[1] Guangdong Univ Foreign Studies, Lab Language Engn & Comp, Guangzhou 510006, Peoples R China
[2] South China Normal Univ, Guangdong Prov Key Lab Nanophoton Funct Mat & Devi, Guangzhou 510006, Peoples R China
[3] Lingnan Normal Univ, Sch Comp Sci & Intelligence Educ, Zhanjiang 524048, Peoples R China
[4] Beihang Univ, State Key Lab Virtual Real Technol & Syst, Beijing 100191, Peoples R China
[5] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China
[6] Soochow Univ, Sch Comp Sci & Technol, Suzhou, Peoples R China
Keywords
Multi-domain; Emotion classification; BERT; Broad learning; Bi-LSTM;
DOI
10.1016/j.knosys.2022.110173
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Existing domain adaptation methods for textual emotion classification tend to focus on a single source domain rather than multiple source domains. The limited amount and variety of information available from a single source domain hampers the efficacy of emotion classification. To improve the performance of domain adaptation, we therefore present a novel multi-source domain adaptation approach for emotion classification that combines broad learning and deep learning. Specifically, we first design a model that uses BERT and Bi-LSTM to extract domain-invariant features from each source domain to the same target domain, which better captures contextual features. We then adopt broad learning to train multiple classifiers on the domain-invariant features, which handles multi-label classification tasks more effectively. In addition, we design a co-training model to boost these classifiers. Finally, we carry out experiments on four datasets against the baseline methods. The experimental results show that our proposed approach significantly outperforms the baselines for textual emotion classification. (c) 2022 Published by Elsevier B.V.
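The broad learning stage described in the abstract can be illustrated with a minimal NumPy sketch of a Broad Learning System in the style of Chen and Liu (2018): fixed random mapped-feature and enhancement nodes, with only a linear readout trained in closed form by ridge regression. The class name, node counts, and regularization value below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class BroadLearningSystem:
    """Minimal Broad Learning System sketch: random mapped-feature
    nodes plus enhancement nodes, with output weights solved in
    closed form by ridge regression (no backpropagation)."""

    def __init__(self, n_mapped=40, n_enhance=60, reg=1e-3, seed=0):
        self.n_mapped = n_mapped    # width of the mapped-feature layer
        self.n_enhance = n_enhance  # width of the enhancement layer
        self.reg = reg              # ridge regularization strength
        self.rng = np.random.default_rng(seed)

    def _expand(self, X):
        # Mapped feature nodes Z and enhancement nodes H use fixed
        # random weights; only the readout W is ever trained.
        Z = np.tanh(X @ self.We + self.be)
        H = np.tanh(Z @ self.Wh + self.bh)
        return np.hstack([Z, H])

    def fit(self, X, Y):
        d = X.shape[1]
        self.We = self.rng.standard_normal((d, self.n_mapped))
        self.be = self.rng.standard_normal(self.n_mapped)
        self.Wh = self.rng.standard_normal((self.n_mapped, self.n_enhance))
        self.bh = self.rng.standard_normal(self.n_enhance)
        A = self._expand(X)
        # Closed-form readout: W = (A^T A + reg*I)^-1 A^T Y
        self.W = np.linalg.solve(
            A.T @ A + self.reg * np.eye(A.shape[1]), A.T @ Y)
        return self

    def predict(self, X):
        # Returns class scores; take argmax for hard labels.
        return self._expand(X) @ self.W
```

In the paper's pipeline, the inputs to `fit` would be the domain-invariant BERT/Bi-LSTM features with one-hot labels, and `predict(...).argmax(axis=1)` would give hard class decisions. Because training reduces to a single linear solve, refitting such classifiers during co-training rounds is cheap, which is one stated motivation for pairing broad learning with the deep feature extractor.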
Pages: 9