Mining Implicit Intention Using Attention-Based RNN Encoder-Decoder Model

Cited: 8
Authors
Li, ChenXing [1 ]
Du, YaJun [1 ]
Wang, SiDa [1 ]
Affiliation
[1] Xihua Univ, Sch Comp & Software Engn, Chengdu 610039, Sichuan, Peoples R China
Source
INTELLIGENT COMPUTING METHODOLOGIES, ICIC 2017, PT III | 2017 / Vol. 10363
Keywords
Implicit intent detection; Recurrent neural networks; Attention; Encoder-Decoder model
DOI
10.1007/978-3-319-63315-2_36
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Nowadays, people are increasingly inclined to use social tools to express their intentions, both explicitly and implicitly. Most prior work is dedicated to explicit intention detection and ignores implicit intention detection, since the former is relatively easy to solve with classification methods. In this work, we use an attention-based encoder-decoder model, which is designed for sequence-to-sequence tasks, to detect users' implicit intentions. Our key idea is to leverage the model to "translate" an implicit intention into the corresponding explicit intention, using parallel corpora built from social data. Notably, our model has domain adaptability: the way people express implicit intentions varies across domains, whereas explicit intentions are mostly expressed in the same form, such as "I want to do sth.". To demonstrate the effectiveness of our method, we conduct experiments in four domains. The results show that our method offers a powerful "translation" of implicit intentions and consequently identifies them.
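The abstract describes the model only at a high level. As a rough illustration of an attention-based RNN encoder-decoder of the kind it names, the PyTorch sketch below "translates" an implicit-intention token sequence into an explicit-intention token sequence with greedy decoding. The GRU cells, the Bahdanau-style additive attention, the dimensions and vocabulary sizes, and every name (Encoder, AttnDecoder, the <sos> token id) are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch (illustrative, not the authors' implementation) of an
# attention-based RNN encoder-decoder that maps an implicit-intention
# token sequence to an explicit-intention token sequence.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):                   # src: (batch, src_len)
        return self.rnn(self.embed(src))      # all outputs, final hidden state


class AttnDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.attn = nn.Linear(hid_dim * 2, hid_dim)  # additive (Bahdanau-style)
        self.v = nn.Linear(hid_dim, 1, bias=False)   # attention scoring
        self.rnn = nn.GRU(emb_dim + hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, prev_tok, hidden, enc_outputs):
        # prev_tok: (batch, 1); hidden: (1, batch, hid);
        # enc_outputs: (batch, src_len, hid)
        query = hidden[-1].unsqueeze(1).expand_as(enc_outputs)
        energy = torch.tanh(self.attn(torch.cat([query, enc_outputs], dim=-1)))
        weights = torch.softmax(self.v(energy), dim=1)   # over source positions
        context = (weights * enc_outputs).sum(dim=1, keepdim=True)
        output, hidden = self.rnn(
            torch.cat([self.embed(prev_tok), context], dim=-1), hidden)
        return self.out(output.squeeze(1)), hidden      # logits: (batch, vocab)


# Toy greedy decoding; vocabulary sizes and the <sos> id (1) are made up.
enc, dec = Encoder(1000), AttnDecoder(500)
src = torch.randint(0, 1000, (2, 7))        # two implicit-intention sequences
enc_out, hidden = enc(src)
tok = torch.ones(2, 1, dtype=torch.long)    # <sos>
for _ in range(5):                          # emit five explicit-intent tokens
    logits, hidden = dec(tok, hidden, enc_out)
    tok = logits.argmax(dim=-1, keepdim=True)
```

In the paper's setting, the source side would hold implicit-intention sentences and the target side their paired explicit reformulations (e.g., "I want to do sth."), with pairs drawn from the parallel corpora built on social data.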
Pages: 413-424
Page count: 12
Related Papers
50 records in total
  • [21] Investigating Methods to Improve Language Model Integration for Attention-based Encoder-Decoder ASR Models
    Zeineldeen, Mohammad
    Glushko, Aleksandr
    Michel, Wilfried
    Zeyer, Albert
    Schlueter, Ralf
    Ney, Hermann
    INTERSPEECH 2021, 2021, : 2856 - 2860
  • [22] Self-Supervised Pre-Training for Attention-Based Encoder-Decoder ASR Model
    Gao, Changfeng
    Cheng, Gaofeng
    Li, Ta
    Zhang, Pengyuan
    Yan, Yonghong
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 1763 - 1774
  • [23] Code generation from a graphical user interface via attention-based encoder-decoder model
    Chen, Wen Yin
    Podstreleny, Pavol
    Cheng, Wen-Huang
    Chen, Yung-Yao
    Hua, Kai-Lung
    MULTIMEDIA SYSTEMS, 2022, 28 (01) : 121 - 130
  • [24] Enhancing lane changing trajectory prediction on highways: A heuristic attention-based encoder-decoder model
    Xiao, Xue
    Bo, Peng
    Chen, Yingda
    Chen, Yili
    Li, Keping
    PHYSICA A-STATISTICAL MECHANICS AND ITS APPLICATIONS, 2024, 639
  • [25] Multivariate time series forecasting via attention-based encoder-decoder framework
    Du, Shengdong
    Li, Tianrui
    Yang, Yan
    Horng, Shi-Jinn
NEUROCOMPUTING, 2020, 388 : 269 - 279
  • [26] Accurate water quality prediction with attention-based bidirectional LSTM and encoder-decoder
    Bi, Jing
    Chen, Zexian
    Yuan, Haitao
    Zhang, Jia
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 238
  • [27] Multiple attention-based encoder-decoder networks for gas meter character recognition
    Li, Weidong
    Wang, Shuai
    Ullah, Inam
    Zhang, Xuehai
    Duan, Jinlong
    SCIENTIFIC REPORTS, 2022, 12 (01)
  • [28] Lane-Level Heterogeneous Traffic Flow Prediction: A Spatiotemporal Attention-Based Encoder-Decoder Model
    Zheng, Yan
    Li, Wenquan
    Zheng, Wen
    Dong, Chunjiao
    Wang, Shengyou
    Chen, Qian
    IEEE INTELLIGENT TRANSPORTATION SYSTEMS MAGAZINE, 2023, 15 (03) : 51 - 67
  • [29] An attention-based row-column encoder-decoder model for text recognition in Japanese historical documents
    Ly, Nam Tuan
    Nguyen, Cuong Tuan
    Nakagawa, Masaki
    PATTERN RECOGNITION LETTERS, 2020, 136 : 134 - 141
  • [30] On Mining Conditions using Encoder-decoder Networks
    Gallego, Fernando O.
    Corchuelo, Rafael
    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE (ICAART), VOL 2, 2019, : 624 - 630