FedKC: Federated Knowledge Composition for Multilingual Natural Language Understanding

Cited: 7
Authors
Wang, Haoyu [1 ]
Zhao, Handong [2 ]
Wang, Yaqing [1 ]
Yu, Tong [2 ]
Gu, Jiuxiang [2 ]
Gao, Jing [1 ]
Affiliations
[1] Purdue Univ, W Lafayette, IN 47907 USA
[2] Adobe Res, San Jose, CA USA
Source
PROCEEDINGS OF THE ACM WEB CONFERENCE 2022 (WWW'22) | 2022
Funding
US National Science Foundation;
Keywords
Federated learning; Multilingual natural language understanding;
DOI
10.1145/3485447.3511988
Chinese Library Classification (CLC)
TP3 [computing technology, computer technology];
Discipline classification code
0812;
Abstract
Multilingual natural language understanding, which aims to comprehend documents in multiple languages, is an important task. Existing efforts have focused on analyzing centrally stored text data, but in practice multilingual data is usually distributed. Federated learning is a promising paradigm for this setting: it trains local models on decentralized data held by local clients and aggregates those local models on a central server to obtain a good global model. However, existing federated learning methods assume that data are independent and identically distributed (IID) and cannot handle multilingual data, which is typically non-IID with severely skewed distributions. First, multilingual data is stored on local client devices, so each client holds only monolingual or bilingual data, which makes it difficult for local models to access information about documents in other languages. Second, the distribution over languages can be skewed: high-resource language data is far more abundant than low-resource language data, so a model trained on such skewed data may focus on high-resource languages while missing key information in low-resource languages. To address these challenges of multilingual federated NLU, we propose a plug-and-play knowledge composition (KC) module, called FedKC, which exchanges knowledge among clients without sharing raw data. Specifically, we propose an effective way to compute a consistency loss defined on the knowledge shared across clients, which encourages models trained on different clients to make similar predictions on similar data. Leveraging this consistency loss, joint training is conducted on distributed data while respecting privacy constraints. We also analyze the potential privacy risk of FedKC and provide a theoretical bound showing that it is difficult to recover the original data from the corrupted data. We conduct extensive experiments on three public multilingual datasets covering three typical NLU tasks: paraphrase identification, question answering matching, and news classification. The experimental results show that the proposed FedKC significantly outperforms state-of-the-art baselines on all three datasets.
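To make the consistency-loss idea in the abstract concrete, here is a minimal sketch, assuming a PyTorch setup; the KL-divergence form, the temperature, and the weight `lam` are illustrative assumptions, not the paper's actual FedKC formulation. Each client adds a term to its local objective that encourages its predictions to agree with predictions peers shared on similar data, so no raw text ever leaves a client.

```python
# Minimal sketch of the cross-client consistency idea from the abstract.
# NOT the paper's implementation: the KL-divergence form, the temperature,
# and the weight `lam` are assumptions made for illustration only.
import torch
import torch.nn.functional as F

def consistency_loss(local_logits: torch.Tensor,
                     peer_probs: torch.Tensor,
                     temperature: float = 2.0) -> torch.Tensor:
    """KL divergence pulling the local model's softened predictions toward
    the (already softmaxed) predictions received from peer clients."""
    log_p_local = F.log_softmax(local_logits / temperature, dim=-1)
    return F.kl_div(log_p_local, peer_probs, reduction="batchmean") * temperature ** 2

def local_objective(task_loss: torch.Tensor,
                    local_logits: torch.Tensor,
                    peer_probs: torch.Tensor,
                    lam: float = 0.5) -> torch.Tensor:
    """Joint objective on a client: supervised loss on private data plus the
    consistency term computed on shared, privacy-protected knowledge."""
    return task_loss + lam * consistency_loss(local_logits, peer_probs)
```

In the paper's setting, the peer predictions would be computed on corrupted (privacy-protected) shared knowledge rather than raw text; the abstract's theoretical bound concerns how hard it is to recover the original data from that corrupted form.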
Pages: 1839-1850
Number of pages: 12