aMV-LSTM: an attention-based model with multiple positional text matching

Cited by: 0
Authors
Belkacem, Thiziri [1 ]
Dkaki, Taoufiq [2 ]
Moreno, Jose G. [1 ]
Boughanem, Mohand [1 ]
Affiliations
[1] Paul Sabatier Univ, IRIT Lab, Toulouse, France
[2] Jean Jaures Univ, IRIT Lab, Toulouse, France
Source
SAC '19: PROCEEDINGS OF THE 34TH ACM/SIGAPP SYMPOSIUM ON APPLIED COMPUTING | 2019
Keywords
Attention models; positional; text representation; text matching;
DOI
10.1145/3297280.3297355
CLC number
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
Deep models have attracted wide interest in the recent NLP and IR state of the art. Among the proposed models, position-based models take into account the position of a word in the text, while attention-based models capture the importance of a word relative to the other words. Positional information is one of the important features that support text representation learning. However, the importance of a given word among the others in a text, which is a key aspect of text matching, is not captured by positional features alone. In this paper, we propose a model that combines a position-based representation learning approach with an attention-based weighting process, where the latter learns an importance coefficient for each word of the input text. Specifically, we extend the position-based model MV-LSTM with an attention layer, yielding a parameterizable architecture. We believe that when the model is aware of both word position and word importance, the learned representations carry more relevant features for the matching process. Our model, named aMV-LSTM, learns attention-based coefficients to weight the words of the different input sentences before computing their position-based representations. Experimental results on question/answer matching and question-pair identification tasks show that the proposed model outperforms the MV-LSTM baseline and several state-of-the-art models.
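As a reading aid, here is a minimal sketch (in PyTorch) of the idea the abstract describes: an attention layer scores each word, the word embeddings are re-weighted by the resulting coefficients, a bidirectional LSTM then builds positional representations, and the two texts are matched through their position-pair interactions. This is not the authors' implementation; the single-linear attention scorer, dot-product interaction, k-max pooling, MLP scorer, and all dimensions are illustrative assumptions.

```python
# Hypothetical sketch of an attention-weighted MV-LSTM-style matcher.
# Not the paper's code; hyperparameters, the attention form, and the
# scoring head are assumptions. Padding masks are omitted for brevity.
import torch
import torch.nn as nn


class AttentiveMVLSTMSketch(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden_dim=50, k=5):
        super().__init__()
        self.k = k                                    # top-k interactions kept
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.attn = nn.Linear(emb_dim, 1)             # per-word importance score
        self.bilstm = nn.LSTM(emb_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(k, 32), nn.ReLU(), nn.Linear(32, 1))

    def encode(self, token_ids):
        x = self.embed(token_ids)                     # (batch, len, emb_dim)
        coeff = torch.softmax(self.attn(x), dim=1)    # attention coefficients
        x = coeff * x                                 # weight words before the LSTM
        h, _ = self.bilstm(x)                         # positional representations
        return h                                      # (batch, len, 2*hidden_dim)

    def forward(self, query_ids, doc_ids):
        hq, hd = self.encode(query_ids), self.encode(doc_ids)
        sim = torch.bmm(hq, hd.transpose(1, 2))       # position-pair interactions
        topk, _ = sim.flatten(1).topk(self.k, dim=1)  # k-max pooling
        return self.mlp(topk).squeeze(-1)             # matching score per pair
```

A call such as model(query_ids, doc_ids) on two padded batches of token ids would return one matching score per pair, which could then feed a ranking or pair-classification loss depending on the task.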
Pages: 788-795
Number of pages: 8
Related papers
50 in total
  • [21] Describing Video With Attention-Based Bidirectional LSTM
    Bin, Yi; Yang, Yang; Shen, Fumin; Xie, Ning; Shen, Heng Tao; Li, Xuelong
    IEEE TRANSACTIONS ON CYBERNETICS, 2019, 49(07): 2631-2641
  • [22] Residual attention-based LSTM for video captioning
    Li, Xiangpeng; Zhou, Zhilong; Chen, Lijiang; Gao, Lianli
    WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, 2019, 22(02): 621-636
  • [23] Residual attention-based LSTM for video captioning
    Xiangpeng Li; Zhilong Zhou; Lijiang Chen; Lianli Gao
    World Wide Web, 2019, 22: 621-636
  • [24] CovTiNet: Covid text identification network using attention-based positional embedding feature fusion
    Md. Rajib Hossain; Mohammed Moshiul Hoque; Nazmul Siddique; Iqbal H. Sarker
    Neural Computing and Applications, 2023, 35: 13503-13527
  • [25] Attention-Based Neural Text Segmentation
    Badjatiya, Pinkesh; Kurisinkel, Litton J.; Gupta, Manish; Varma, Vasudeva
    ADVANCES IN INFORMATION RETRIEVAL (ECIR 2018), 2018, 10772: 180-193
  • [26] CovTiNet: Covid text identification network using attention-based positional embedding feature fusion
    Hossain, Md. Rajib; Hoque, Mohammed Moshiul; Siddique, Nazmul; Sarker, Iqbal H.
    NEURAL COMPUTING & APPLICATIONS, 2023, 35(18): 13503-13527
  • [27] Attention-based multimodal image matching
    Moreshet, Aviad; Keller, Yosi
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2024, 241
  • [28] Attention-based Text Recognition in the Wild
    Yan, Zhi-Chen; Yu, Stephanie A.
    PROCEEDINGS OF THE 1ST INTERNATIONAL CONFERENCE ON DEEP LEARNING THEORY AND APPLICATIONS (DELTA), 2020: 42-49
  • [29] Entity recognition in Chinese clinical text using attention-based CNN-LSTM-CRF
    Buzhou Tang; Xiaolong Wang; Jun Yan; Qingcai Chen
    BMC Medical Informatics and Decision Making, 19
  • [30] Entity recognition in Chinese clinical text using attention-based CNN-LSTM-CRF
    Tang, Buzhou; Wang, Xiaolong; Yan, Jun; Chen, Qingcai
    BMC MEDICAL INFORMATICS AND DECISION MAKING, 2019, 19 (Suppl 3)