MA-MRC: A Multi-answer Machine Reading Comprehension Dataset

Cited by: 0
Authors
Yue, Zhiang [1 ]
Liu, Jingping [2 ]
Zhang, Cong [3 ]
Wang, Chao [4 ]
Jiang, Haiyun [5 ]
Zhang, Yue [2 ]
Tian, Xianyang [2 ]
Cen, Zhedong [2 ]
Xiao, Yanghua [1 ]
Ruan, Tong [2 ]
Affiliations
[1] Fudan Univ, Shanghai, Peoples R China
[2] East China Univ Sci & Technol, Shanghai, Peoples R China
[3] AECC Sichuan Gas Turbine Estab, Mianyang, Sichuan, Peoples R China
[4] Shanghai Univ, Shanghai, Peoples R China
[5] Tencent AI Lab, Shenzhen, Peoples R China
Keywords
Machine Reading Comprehension; Multiple Answer; Knowledge Graph;
DOI
10.1145/3539618.3592015
Chinese Library Classification: TP [Automation Technology, Computer Technology]
Discipline Code: 0812
Abstract
Machine reading comprehension (MRC) is an essential task for many question-answering applications. However, existing MRC datasets mainly focus on data with a single answer and overlook questions with multiple answers, which are common in the real world. In this paper, we aim to construct an MRC dataset that contains both single-answer and multi-answer data. To achieve this, we design a novel pipeline: data collection, data cleaning, question generation and test set annotation. Based on these procedures, we construct a high-quality multi-answer MRC dataset (MA-MRC) with 129K question-answer-context samples. We implement a series of baselines and carry out extensive experiments on MA-MRC. The experimental results show that MA-MRC is a challenging dataset that can facilitate future research on the multi-answer MRC task.
Pages: 2144-2148
Number of pages: 5
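The abstract describes a multi-answer extractive MRC dataset of question-answer-context samples. As a minimal illustration of what such a sample might look like, the Python sketch below builds one hypothetical example with two gold answer spans and checks that each span can be recovered from the context by its character offset; the field names (context, question, answers, start) are assumptions for illustration only, not the published MA-MRC schema.

```python
# Illustrative sketch only: field names are assumptions, not the official
# MA-MRC schema. It shows the general shape of a multi-answer extractive
# MRC sample, where several spans in one context answer a single question.
sample = {
    "context": "Fudan University and East China University of Science and "
               "Technology are located in Shanghai.",
    "question": "Which universities are located in Shanghai?",
    "answers": [  # multi-answer: more than one gold span per question
        {"text": "Fudan University", "start": 0},
        {"text": "East China University of Science and Technology", "start": 21},
    ],
}

def spans_match_context(example):
    """Check that every annotated answer span is recoverable from the context."""
    ctx = example["context"]
    return all(
        ctx[a["start"]:a["start"] + len(a["text"])] == a["text"]
        for a in example["answers"]
    )

print(spans_match_context(sample))  # True when the character offsets are consistent
```

A single-answer sample would simply carry one entry in the answers list, so the same structure can represent both kinds of data described in the abstract.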