Unsupervised Domain Adaptation of Language Models for Reading Comprehension

Cited by: 0
Authors
Nishida, Kosuke [1 ]
Nishida, Kyosuke [1 ]
Saito, Itsumi [1 ]
Asano, Hisako [1 ]
Tomita, Junji [1 ]
Affiliations
[1] NTT Corp, NTT Media Intelligence Labs, 1-1 Hikarinooka, Yokosuka, Kanagawa, Japan
Source
PROCEEDINGS OF THE 12TH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION (LREC 2020) | 2020
Keywords
Reading Comprehension; Domain Adaptation; Unsupervised Learning
DOI
Not available
Chinese Library Classification
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
This study tackles unsupervised domain adaptation for reading comprehension (UDARC). Reading comprehension (RC) is the task of answering questions over textual sources. State-of-the-art RC models still lack general linguistic intelligence; that is, their accuracy degrades on out-of-domain datasets that were not used in training. We hypothesize that this gap is caused by a lack of language modeling (LM) capability in the out-domain. In the UDARC setting, models may use supervised RC training data in the source domain but only unlabeled passages in the target domain. To solve the UDARC problem, we present two domain adaptation models. The first learns the out-domain LM and the in-domain RC task sequentially. The second, our proposed model, uses a multi-task learning approach that combines LM and RC. Both models can retain the RC capability acquired from the supervised data in the source domain and the LM capability acquired from the unlabeled data in the target domain. We evaluated the models on UDARC with five datasets from different domains. They outperformed a model without domain adaptation. In particular, the proposed model yielded an improvement of 4.3/4.2 points in EM/F1 on an unseen biomedical domain.
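The abstract sketches two adaptation strategies: sequential training (target-domain LM, then source-domain RC) and joint multi-task training of the LM and RC objectives. Below is a minimal PyTorch sketch of the multi-task variant; the encoder sizes, the two heads, and the mixing weight `lam` are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class MultiTaskRCModel(nn.Module):
    """Shared encoder with two heads: a masked-LM head for unlabeled
    target-domain passages and a span head for source-domain RC.
    All sizes are illustrative, not the paper's configuration."""

    def __init__(self, vocab_size=30522, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                       batch_first=True),
            num_layers=2)
        self.lm_head = nn.Linear(hidden, vocab_size)   # LM objective
        self.span_head = nn.Linear(hidden, 2)          # start/end logits

    def forward(self, token_ids):
        return self.encoder(self.embed(token_ids))

def multitask_step(model, rc_batch, lm_batch, lam=1.0):
    """One joint step: L = L_RC + lam * L_LM. `lam` is a hypothetical
    mixing weight; the paper's exact weighting may differ."""
    ce = nn.CrossEntropyLoss(ignore_index=-100)

    # RC loss on labeled source-domain data (gold start/end positions).
    h = model(rc_batch["input_ids"])
    start_logits, end_logits = model.span_head(h).split(1, dim=-1)
    rc_loss = (ce(start_logits.squeeze(-1), rc_batch["start"]) +
               ce(end_logits.squeeze(-1), rc_batch["end"])) / 2

    # Masked-LM loss on unlabeled target-domain passages
    # (labels are -100 everywhere except masked positions).
    h = model(lm_batch["input_ids"])
    lm_loss = ce(model.lm_head(h).flatten(0, 1),
                 lm_batch["labels"].flatten())

    return rc_loss + lam * lm_loss
```

Summing the two losses in one step lets the shared encoder see both the labeled source-domain QA signal and the unlabeled target-domain text, which is what allows it to keep the RC capability while adapting its LM to the target domain.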
Pages: 5392-5399
Page count: 8