Multi-hop question answering using sparse graphs

Cited by: 0
Authors
Hemmati, Nima [1 ]
Ghassem-Sani, Gholamreza [1 ]
Affiliations
[1] Sharif Univ Technol, Comp Engn Dept, Tehran, Iran
Keywords
Natural language processing; Multi-hop question answering; Deep learning; Graph convolutional network; Attention mechanism
DOI
10.1016/j.engappai.2023.107128
Chinese Library Classification (CLC) number
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
Multi-hop question answering (QA) across multiple documents requires a deep understanding of the relationships between entities in documents, questions, and answer candidates. Graph Neural Networks (GNNs) have emerged as a promising tool for multi-hop QA tasks; however, these models often suffer from growing computational and model complexity, which makes them inefficient for real-world applications with limited resources. In this paper, we propose a graph-based approach called the Sparse Graph-based Multi-hop Question Answering system (SG-MQA), which provides a thorough examination of these challenges and presents appropriate measures to address them. We propose a novel approach based on the Relational Graph Convolutional Network (R-GCN) that reduces model complexity and improves performance, employing various strategies and multiple experiments to achieve this goal. We demonstrate the efficacy of the proposed approach through experiments on two QA datasets, WikiHop and HotpotQA. The SG-MQA model outperforms all state-of-the-art (SOTA) methods on WikiHop, raising the accuracy of the best previous approach from 74.4% to 78.3%. It also achieves acceptable performance on HotpotQA: although its F1 score is inferior to that of the SOTA model, it is comparable to those of all other approaches, and on the Exact Match (EM) measure SG-MQA performs comparably to the SOTA model and outperforms all other approaches.
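The abstract builds on the Relational Graph Convolutional Network (R-GCN), in which each node aggregates neighbor features separately per relation type: h_i' = sigma(W_0 h_i + sum_r sum_{j in N_r(i)} (1/|N_r(i)|) W_r h_j). As a rough, dense NumPy sketch of one such layer (an illustration of the general R-GCN update from Schlichtkrull et al., not the SG-MQA implementation, which relies on sparse graph representations):

```python
import numpy as np

def rgcn_layer(h, adj_per_relation, w_self, w_rel):
    """One R-GCN layer with mean normalization (dense illustrative sketch).

    h:                (n_nodes, d_in) node feature matrix
    adj_per_relation: list of (n_nodes, n_nodes) 0/1 adjacency matrices,
                      one per relation type
    w_self:           (d_in, d_out) self-loop weight W_0
    w_rel:            list of (d_in, d_out) per-relation weights W_r
    """
    out = h @ w_self  # self-connection term: W_0 h_i
    for adj, w in zip(adj_per_relation, w_rel):
        deg = adj.sum(axis=1, keepdims=True)       # |N_r(i)| for each node
        norm = adj / np.maximum(deg, 1.0)          # 1/c_{i,r}; safe for isolated nodes
        out += (norm @ h) @ w                      # aggregate relation-r neighbors
    return np.maximum(out, 0.0)                    # ReLU activation
```

A sparse implementation would store each relation's adjacency in CSR form and replace the dense matrix products with sparse ones, which is where the complexity reduction emphasized in the abstract comes from.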
Pages: 13
Related papers
50 records
  • [41] TapasQA - Question Answering on Statistical Plots Using Google TAPAS
    Jain, Himanshu
    Jayaraman, Sneha
    Sooryanath, I. T.
    Mamatha, H. R.
    THIRD INTERNATIONAL CONFERENCE ON IMAGE PROCESSING AND CAPSULE NETWORKS (ICIPCN 2022), 2022, 514 : 63 - 77
  • [42] Scene text visual question answering by using YOLO and STN
    Nourali, K.
    Dolkhani, E.
    International Journal of Speech Technology, 2024, 27 (01) : 69 - 76
  • [43] Malayalam Question Answering System Using Deep Learning Approaches
    Rahmath, Reji K.
    Raj, P. C. Reghu
    Rafeeque, P. C.
    IETE JOURNAL OF RESEARCH, 2023, 69 (12) : 8889 - 8901
  • [44] Multihop Question Answering by Using Sequential Path Expansion With Backtracking
    Alagha, Iyad
    IEEE ACCESS, 2022, 10 : 76842 - 76854
  • [45] MVARN: Multi-view Attention Relation Network for Figure Question Answering
    Wang, Yingdong
    Wu, Qingfeng
    Lin, Weiqiang
    Ma, Linjian
    Li, Ying
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT III, KSEM 2023, 2023, 14119 : 30 - 38
  • [46] MOQAS: Multi-objective question answering system
    Tohidi, Nasim
    Hasheminejad, Seyed Mohammad Hossein
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2019, 36 (04) : 3495 - 3512
  • [47] Attention-based Multi-hop Reasoning for Knowledge Graph
    Wang, Zikang
    Li, Linjing
    Zeng, Daniel Dajun
    Chen, Yue
    2018 IEEE INTERNATIONAL CONFERENCE ON INTELLIGENCE AND SECURITY INFORMATICS (ISI), 2018, : 211 - 213
  • [48] TEBC-Net: An Effective Relation Extraction Approach for Simple Question Answering over Knowledge Graphs
    Li, Jianbin
    Qu, Ketong
    Yan, Jingchen
    Zhou, Liting
    Cheng, Long
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT I, 2021, 12815 : 154 - 165
  • [49] Biometric surveillance using visual question answering
    Toor, Andeep S.
    Wechsler, Harry
    Nappi, Michele
    PATTERN RECOGNITION LETTERS, 2019, 126 : 111 - 118
  • [50] Question Answering System Using Web Snippets
    Menaha, R.
    Surya, Udhaya A.
    Nandhni, K.
    Ishwarya, M.
    2017 INTERNATIONAL CONFERENCE ON I-SMAC (IOT IN SOCIAL, MOBILE, ANALYTICS AND CLOUD) (I-SMAC), 2017, : 387 - 390