An Adversarial Multi-task Learning Method for Chinese Text Correction with Semantic Detection

Cited by: 1
Authors
Wang, Fanyu [1 ]
Xie, Zhenping [1 ]
Affiliations
[1] Jiangnan Univ, Sch Artificial Intelligence & Comp Sci, Wuxi 214122, Peoples R China
Source
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT II | 2022 / Vol. 13530
Funding
National Natural Science Foundation of China;
Keywords
Chinese text correction; Adversarial learning; Multi-task learning; Text semantic modeling;
DOI
10.1007/978-3-031-15931-2_14
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Text correction, especially semantic correction in widely used scenarios, strongly needs improvement for the sake of textual fluency and writing efficiency. An adversarial multi-task learning method is proposed to enhance the modeling and detection of character polysemy in Chinese sentence context. In it, two models, a masked language model and a scoring language model, are introduced as a pair of coupled yet adversarial learning tasks. Moreover, a Monte Carlo tree search strategy and a policy network are introduced to perform efficient Chinese text correction with semantic detection. Experiments on three datasets against five comparable methods show that our method achieves good performance on the Chinese text correction task with better semantic rationality.
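The search component sketched in the abstract can be illustrated with a toy example. Everything below is an assumption for illustration, not the paper's method: `mcts_correct`, `toy_score`, and the confusion set are hypothetical names, the search is a minimal depth-one UCT over single-character substitutions, and a deterministic similarity score stands in for the paper's scoring language model (the actual method also couples a masked language model and a policy network).

```python
import math

def mcts_correct(chars, candidates, score, n_sims=200, c_uct=1.4):
    """Depth-one UCT search over single-character substitutions.

    chars: list of characters in the noisy sentence.
    candidates: {position: [alternative characters]} (a confusion set).
    score: callable mapping a candidate string to a reward in [0, 1]
           (a stand-in for a scoring language model).
    """
    actions = [(p, ch) for p, alts in candidates.items() for ch in alts]
    visits = {a: 0 for a in actions}
    value = {a: 0.0 for a in actions}

    def apply_edit(action):
        pos, ch = action
        edited = chars[:]
        edited[pos] = ch
        return ''.join(edited)

    for t in range(1, n_sims + 1):
        # Selection: UCT balances the mean reward with an exploration bonus.
        def uct(a):
            if visits[a] == 0:
                return float('inf')  # try every candidate edit at least once
            return value[a] / visits[a] + c_uct * math.sqrt(math.log(t) / visits[a])
        a = max(actions, key=uct)
        # Evaluation: score the edited sentence (deterministic "rollout").
        reward = score(apply_edit(a))
        visits[a] += 1
        value[a] += reward

    # Decision: return the edit with the most visits, the usual MCTS rule.
    return apply_edit(max(actions, key=lambda a: visits[a]))

# Toy scoring model: fraction of characters matching a reference sentence.
TARGET = "machine"
def toy_score(s):
    return sum(a == b for a, b in zip(s, TARGET)) / len(TARGET)

corrected = mcts_correct(list("mashine"), {2: ["c", "s", "x"]}, toy_score)
print(corrected)  # → machine
```

In the paper's setting the reward would come from the learned scoring language model rather than a fixed reference, and the policy network would prioritize which edits to expand; this sketch only shows the select-evaluate-backpropagate loop.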
Pages: 159-173
Page count: 15