A context-enhanced sentence representation learning method for close domains with topic modeling

Cited by: 3
Authors
Li, Shuangyin [1 ]
Chen, Weiwei [2 ]
Zhang, Yu [3 ]
Zhao, Gansen [1 ]
Pan, Rong [2 ]
Huang, Zhenhua [1 ]
Tang, Yong [1 ]
Affiliations
[1] South China Normal Univ, Sch Comp Sci, Guangzhou, Guangdong, Peoples R China
[2] Sun Yat-sen Univ, Sch Data & Comp Sci, Guangzhou, Guangdong, Peoples R China
[3] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sentence representations learning; Closed domains; Bayesian sentence embedding; Bi-directional context-enhanced; Semantic interpretability; Topic modeling; SHORT TEXT;
DOI
10.1016/j.ins.2022.05.113
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Sentence representation approaches have been widely used and proven to be effective in many text modeling tasks and downstream applications. Many recent proposals learn sentence representations with deep neural frameworks. However, these methods are pre-trained in open domains and depend on the availability of large-scale data for model fitting. As a result, they may fail in special scenarios where data are sparse and embedding interpretations are required, such as legal, medical, or technical fields. In this paper, we present an unsupervised learning method to exploit representations of sentences for such closed domains via topic modeling. We reformulate the inference process of the sentences with the corresponding contextual sentences and the associated words, and propose an effective context-enhanced process called bi-Directional Context-enhanced Sentence Representation Learning (bi-DCSR). This method takes advantage of the semantic distributions of the nearby contextual sentences and the associated words to form a context-enhanced sentence representation. To support the bi-DCSR, we develop a novel Bayesian topic model, the Hybrid Priors Topic Model (HPTM), which embeds sentences and words into the same latent interpretable topic space. Based on the topic space defined by the HPTM, the bi-DCSR method learns the embedding of a sentence from its two-directional contextual sentences and the words in it, which allows us to efficiently learn high-quality sentence representations in such closed domains. In addition to an open-domain dataset from Wikipedia, our method is validated on three closed-domain datasets from legal cases, electronic medical records, and technical reports. Our experiments indicate that the HPTM significantly outperforms existing topic models on language modeling and topic coherence. Meanwhile, the bi-DCSR method not only outperforms state-of-the-art unsupervised learning methods on closed-domain sentence classification tasks, but also yields competitive performance compared to these established approaches on the open domain. Additionally, visualizations of the semantics of sentences and words demonstrate the interpretable capacity of our model. (c) 2022 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
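To make the bi-directional context enhancement described above concrete, the following minimal Python sketch illustrates the general idea: given per-sentence and per-word topic distributions over the same K-dimensional topic space (as a model like HPTM provides), the representation of a sentence mixes its own topic distribution with those of its left and right neighbors and of the words it contains. The function name, mixing weights, and simple averaging here are illustrative assumptions for exposition only, not the paper's actual Bayesian inference procedure.

import numpy as np

def context_enhanced_embedding(sent_topics, word_topics, i,
                               alpha=0.5, beta=0.25):
    # sent_topics: (num_sentences, K) array of per-sentence topic
    #              distributions from a topic model.
    # word_topics: list of (num_words_i, K) arrays giving per-word
    #              topic distributions for each sentence.
    # alpha, beta: illustrative mixing weights, not taken from the paper.
    own = sent_topics[i]
    words = word_topics[i].mean(axis=0)                  # aggregate word-level semantics
    prev = sent_topics[i - 1] if i > 0 else own          # left context (fallback: itself)
    nxt = (sent_topics[i + 1]
           if i + 1 < len(sent_topics) else own)         # right context (fallback: itself)
    emb = (alpha * own
           + beta * (prev + nxt) / 2.0                   # bi-directional sentence context
           + (1.0 - alpha - beta) * words)               # word-level evidence
    return emb / emb.sum()                               # renormalize to a topic distribution

Because the output remains a distribution over the shared topic space, each dimension of the resulting sentence embedding can be read as the weight of an interpretable topic, which is the semantic-interpretability property the abstract highlights.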
Pages: 186-210
Number of pages: 25
相关论文
共 49 条
[1]  
Adi Yossi., 2017, ICLR
[2]  
[Anonymous], 2011, Advances in Neural Information Processing Systems
[3]  
[Anonymous], UNIVERSAL SENTENCE E, Patent No. 180311175
[4]  
[Anonymous], 2016, Sentence level recurrent topic model: Letting topics speak for themselves
[5]   A general framework to expand short text for topic modeling [J].
Bicalho, Paulo ;
Pita, Marcelo ;
Pedrosa, Gabriel ;
Lacerda, Anisio ;
Pappa, Gisele L. .
INFORMATION SCIENCES, 2017, 393 :66-81
[6]  
Bottou L., 2008, Proc. Adv. Neural Inf. Process. Syst., P161
[7]  
Cer D, 2018, CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018): PROCEEDINGS OF SYSTEM DEMONSTRATIONS, P169
[8]   A nonparametric model for online topic discovery with word embeddings [J].
Chen, Junyang ;
Gong, Zhiguo ;
Liu, Weiwen .
INFORMATION SCIENCES, 2019, 504 :32-47
[9]   Topic representation: Finding more representative words in topic models [J].
Chi, Jinjin ;
Ouyang, Jihong ;
Li, Changchun ;
Dong, Xueyang ;
Li, Ximing ;
Wang, Xinhua .
PATTERN RECOGNITION LETTERS, 2019, 123 :53-60
[10]   Deep Unfolding for Topic Models [J].
Chien, Jen-Tzung ;
Lee, Chao-Hsi .
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (02) :318-331