Efficient Question Answering with Question Decomposition and Multiple Answer Streams

Cited by: 0
Authors
Hartrumpf, Sven [1 ]
Gloeckner, Ingo [1 ]
Leveling, Johannes [2 ]
Affiliations
[1] Fernuniv, IICS, D-58084 Hagen, Germany
[2] Dublin City Univ, CNGL, Dublin 9, Ireland
Source
EVALUATING SYSTEMS FOR MULTILINGUAL AND MULTIMODAL INFORMATION ACCESS | 2009 / Vol. 5706
Keywords
DOI
None available
CLC classification
TP [Automation Technology, Computer Technology];
Subject classification
0812 ;
Abstract
The German question answering (QA) system IRSAW (formerly: InSicht) participated in QA@CLEF for the fifth time. IRSAW was introduced in 2007 by integrating the deep answer producer InSicht, several shallow answer producers, and a logical validator. InSicht builds on a deep QA approach: it transforms documents to semantic representations using a parser, draws inferences on semantic representations with rules, and matches semantic representations derived from questions and documents. InSicht was improved for QA@CLEF 2008 mainly in the following two areas. The coreference resolver was trained on question series instead of newspaper texts in order to be better applicable to follow-up questions. Questions are decomposed by several methods on the level of semantic representations. On the shallow processing side, the number of answer producers was increased from two to four by adding FACT, a fact index, and SHASE, a shallow semantic network matcher. The answer validator introduced in 2007 was replaced by the faster RAVE validator, designed for logic-based answer validation under time constraints. Using RAVE for merging the results of the answer producers, monolingual German runs and bilingual runs with source languages English and Spanish were produced by applying the machine translation web service Promt. An error analysis shows the main problems for the precision-oriented deep answer producer InSicht and the potential offered by the recall-oriented shallow answer producers.
Pages: 421 / +
Page count: 2
Related papers
50 results
  • [1] Will this Question be Answered? Question Filtering via Answer Model Distillation for Efficient Question Answering
    Garg, Siddhant
    Moschitti, Alessandro
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 7329 - 7346
  • [2] Unsupervised Question Decomposition for Question Answering
    Perez, Ethan
    Lewis, Patrick
    Yih, Wen-tau
    Cho, Kyunghyun
    Kiela, Douwe
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 8864 - 8880
  • [3] ANSWERING THE QUESTION OR QUESTIONING THE ANSWER?
    Robson, Debbie
    McNeill, Ann
    ADDICTION, 2018, 113 (03) : 407 - 409
  • [4] Question recommendation and answer extraction in question answering community
    Xianfeng, Yang
    Pengfei, Liu
    International Journal of Database Theory and Application, 2016, 9 (01): : 35 - 44
  • [5] Locate Before Answering: Answer Guided Question Localization for Video Question Answering
    Qian, Tianwen
    Cui, Ran
    Chen, Jingjing
    Peng, Pai
    Guo, Xiaowei
    Jiang, Yu-Gang
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 4554 - 4563
  • [6] Question Condensing Networks for Answer Selection in Community Question Answering
    Wu, Wei
    Sun, Xu
    Wang, Houfeng
    PROCEEDINGS OF THE 56TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL), VOL 1, 2018, : 1746 - 1755
  • [8] Exploring Answer Information for Question Classification in Community Question Answering
    Wang, Jian
    Lin, Hongfei
    Dong, Hualei
    Xiong, Daping
    Yang, Zhihao
    JOURNAL OF MULTIPLE-VALUED LOGIC AND SOFT COMPUTING, 2018, 31 (1-2) : 67 - 84
  • [9] Question and Answer Classification in Czech Question Answering Benchmark Dataset
    Kusnirakova, Dasa
    Medved, Marek
    Horak, Ales
    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE (ICAART), VOL 2, 2019, : 701 - 706
  • [10] Answer formulation for question-answering
    Kosseim, L
    Plamondon, L
    Guillemette, LJ
    ADVANCES IN ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2003, 2671 : 24 - 34