Performance Improvement on Traditional Chinese Task-Oriented Dialogue Systems With Reinforcement Learning and Regularized Dropout Technique

Cited by: 3
Authors
Sheu, Jeng-Shin [1 ]
Wu, Siang-Ru [1 ]
Wu, Wen-Hung [2 ]
Affiliations
[1] Natl Yunlin Univ Sci & Technol, Dept Comp Sci & Informat Engn, Yunlin 640002, Taiwan
[2] Ponddy Educ Taiwan Ltd, New Taipei 231, Taiwan
Keywords
Task analysis; Reinforcement learning; Computational modeling; Artificial intelligence; Tokenization; Data models; NLP; regularized dropout; reinforcement learning; task-oriented dialogue;
DOI
10.1109/ACCESS.2023.3248796
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
The development of conversational voice assistant applications is in full swing around the world. This paper aims to develop traditional Chinese multi-domain task-oriented dialogue (TOD) systems. Such systems are typically implemented with a pipeline approach, in which submodules are optimized independently, resulting in inconsistencies among them. Instead, this paper implements end-to-end multi-domain TOD models using pre-trained deep neural networks (DNNs), which allows all submodules to be integrated into a single DNN model and resolves these inconsistencies. Data shortages are common in conversational natural language processing (NLP) tasks using DNN models. In this regard, dropout regularization has been widely used to mitigate the overfitting caused by insufficient training data. However, the randomness it introduces leads to non-negligible discrepancies between training and inference. On the other hand, pre-trained language models have successfully provided effective regularization for NLP tasks; an inherent disadvantage is that fine-tuning a pre-trained language model suffers from exposure bias and loss-evaluation mismatch. To this end, we propose a reinforcement learning (RL) approach to address both issues. Furthermore, we adopt a method called regularized dropout (R-Drop) to reduce the inconsistency introduced by the dropout layers of DNNs. Experimental results show that our proposed RL approach and the R-Drop technique significantly improve the joint goal accuracy (JGA) score and the combined score of the traditional Chinese TOD system on the tasks of dialogue state tracking (DST) and end-to-end sentence prediction, respectively.
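Note: the following is a minimal, illustrative sketch of the R-Drop consistency loss mentioned in the abstract, assuming a PyTorch classifier with dropout active during training; the names model, inputs, labels, and alpha are hypothetical and not taken from the paper.

import torch
import torch.nn.functional as F

def r_drop_loss(model, inputs, labels, alpha=4.0):
    # Two stochastic forward passes over the same batch; different dropout
    # masks yield two different sub-models, so the logits differ.
    logits1 = model(inputs)
    logits2 = model(inputs)

    # Standard cross-entropy averaged over both passes.
    ce = 0.5 * (F.cross_entropy(logits1, labels) + F.cross_entropy(logits2, labels))

    # Symmetric KL divergence pulls the two predictive distributions together,
    # which is how R-Drop reduces the train/inference discrepancy of dropout.
    logp1 = F.log_softmax(logits1, dim=-1)
    logp2 = F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (F.kl_div(logp1, logp2, log_target=True, reduction="batchmean")
                + F.kl_div(logp2, logp1, log_target=True, reduction="batchmean"))

    return ce + alpha * kl

The weight alpha trades off task loss against the consistency term; the paper tunes its own setting, so the value above is only a placeholder.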
Pages: 19849-19862
Page count: 14