Gotta: Generative Few-shot Question Answering by Prompt-based Cloze Data Augmentation

Cited: 0
Authors
Chen, Xiusi [1 ]
Zhang, Yu [2 ]
Deng, Jinliang [3 ]
Jiang, Jyun-Yu [4 ]
Wang, Wei [1 ]
Affiliations
[1] Univ Calif Los Angeles, Los Angeles, CA 90095 USA
[2] Univ Illinois, Urbana, IL USA
[3] Univ Technol Sydney, Sydney, NSW, Australia
[4] Amazon Search, Palo Alto, CA USA
Keywords
question answering; knowledge base; entity; data augmentation;
DOI
Not available
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Few-shot question answering (QA) aims at precisely discovering answers to a set of questions from context passages while only a few training samples are available. Although existing studies have made some progress and can usually achieve proper results, they struggle to understand the deep semantics needed to reason out the questions. In this paper, we develop Gotta, a Generative prOmpT-based daTa Augmentation framework to mitigate the challenge above. Inspired by the human reasoning process, we propose to integrate the cloze task to enhance few-shot QA learning. Following the recent success of prompt-tuning, we present the cloze task in the same format as the main QA task, allowing the model to learn both tasks seamlessly together to fully take advantage of the power of prompt-tuning. Extensive experiments on widely used benchmarks demonstrate that Gotta consistently outperforms competitive baselines, validating the effectiveness of our proposed prompt-tuning-based cloze task, which not only fine-tunes language models but also learns to guide reasoning in QA tasks. Further analysis shows that the prompt-based loss incorporates the auxiliary task better than the multi-task loss, highlighting the strength of prompt-tuning on the few-shot QA task.
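To make the abstract's core idea concrete, the following is a minimal illustrative sketch, not the authors' implementation, of how cloze-style augmentation can generate extra QA training examples: an entity mention in the context is masked out, the masked passage serves as the cloze "question", and the original entity span is the answer, mirroring the format of extractive QA. All names here are hypothetical.

```python
# Illustrative sketch of cloze-style data augmentation for extractive QA.
# Given a context passage and an entity mention found in it, mask the entity
# to form a cloze question whose answer is the original entity span.

def make_cloze_example(context: str, entity: str, mask_token: str = "[MASK]"):
    """Turn (context, entity) into a cloze-style QA training example."""
    start = context.find(entity)
    if start == -1:
        return None  # entity not present in the passage; nothing to augment
    # The cloze question is the context with the entity masked out.
    question = context[:start] + mask_token + context[start + len(entity):]
    # The answer is the masked entity and its character offset in the context.
    return {"question": question, "answer": entity, "answer_start": start}

example = make_cloze_example(
    "Gotta was developed at UCLA for few-shot question answering.", "UCLA"
)
```

Such augmented examples share the span-extraction format of the main QA task, which is what lets a prompt-tuned model train on both seamlessly.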
Pages: 909 - 917
Page count: 9
Related Papers
50 records in total
  • [31] GViG: Generative Visual Grounding Using Prompt-Based Language Modeling for Visual Question Answering
    Li, Yi-Ting
    Lin, Ying-Jia
    Yeh, Chia-Jen
    Lin, Chun-Yi
    Kao, Hung-Yu
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PT VI, PAKDD 2024, 2024, 14650 : 83 - 94
  • [32] Domain-Specific Few-Shot Table Prompt Question Answering via Contrastive Exemplar Selection
    Mo, Tianjin
    Xiao, Qiao
    Zhang, Hongyi
    Li, Ren
    Wu, Yunsong
    ALGORITHMS, 2024, 17 (07)
  • [33] Few-Shot Multihop Question Answering over Knowledge Base
    Fan, Meihao
    Zhang, Lei
    Xiao, Siyao
    Liang, Yuru
    WIRELESS COMMUNICATIONS & MOBILE COMPUTING, 2022, 2022
  • [34] Few-shot Unified Question Answering: Tuning Models or Prompts?
    Bansal, Srijan
    Yavuz, Semih
    Pang, Bo
    Bhat, Meghana
    Zhou, Yingbo
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 8200 - 8220
  • [35] Few-shot out-of-scope intent classification: analyzing the robustness of prompt-based learning
    Jiang, Yiwei
    De Raedt, Maarten
    Deleu, Johannes
    Demeester, Thomas
    Develder, Chris
    APPLIED INTELLIGENCE, 2024, 54 (02) : 1474 - 1496
  • [36] Prompt-Based Graph Convolution Adversarial Meta-Learning for Few-Shot Text Classification
    Gong, Ruwei
    Qin, Xizhong
    Ran, Wensheng
    APPLIED SCIENCES-BASEL, 2023, 13 (16):
  • [38] FREDA: Few-Shot Relation Extraction Based on Data Augmentation
    Liu, Junbao
    Qin, Xizhong
    Ma, Xiaoqin
    Ran, Wensheng
    APPLIED SCIENCES-BASEL, 2023, 13 (14):
  • [39] Prompt-Based Label-Aware Framework for Few-Shot Multi-Label Text Classification
    Thaminkaew, Thanakorn
    Lertvittayakumjorn, Piyawat
    Vateekul, Peerapon
    IEEE ACCESS, 2024, 12 : 28310 - 28322
  • [40] Few-Shot Charge Prediction with Data Augmentation and Feature Augmentation
    Wang, Peipeng
    Zhang, Xiuguo
    Cao, Zhiying
    APPLIED SCIENCES-BASEL, 2021, 11 (22):