CROSS-LINGUAL TRANSFER LEARNING FOR SPOKEN LANGUAGE UNDERSTANDING

Cited by: 0
Authors
Quynh Ngoc Thi Do [1 ]
Gaspers, Judith [1 ]
Affiliation
[1] Amazon, Aachen, Germany
Source
2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | 2019
Keywords
Spoken Language Understanding; Transfer Learning;
DOI
Not available
Chinese Library Classification
O42 [Acoustics];
Discipline Classification Codes
070206; 082403;
Abstract
Typically, spoken language understanding (SLU) models are trained on annotated data, which are costly to gather. Aiming to reduce the data needed to bootstrap an SLU system for a new language, we present a simple but effective weight-transfer approach that uses data from another language. The approach is evaluated within our multi-task SLU framework, developed to be applicable across different languages. We evaluate our approach on the ATIS corpus and a real-world SLU dataset, showing that i) our monolingual models outperform the state of the art, ii) the amount of data needed to bootstrap an SLU system for a new language can be greatly reduced, and iii) while multi-task training improves over separate training, different weight-transfer settings may work best for different SLU modules.
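The weight-transfer idea described in the abstract — initializing a target-language model with parameters learned on a source language, while leaving other parameters at their fresh initialization — can be illustrated abstractly. This is a minimal sketch, not the authors' implementation: the layer names (`encoder`, `slot_tagger`, `intent_clf`), the dictionary-based "models", and the choice of which layers to transfer are all hypothetical.

```python
def transfer_weights(source_model, target_model, shared_layers):
    """Build an initialization for the target-language model.

    Layers listed in `shared_layers` are copied from the source-language
    model; all other layers keep the target model's own (random) init,
    mimicking per-module weight-transfer settings.
    """
    init = {}
    for name, weights in target_model.items():
        if name in shared_layers and name in source_model:
            init[name] = list(source_model[name])  # transfer source weights
        else:
            init[name] = list(weights)             # keep fresh initialization
    return init

# Toy "models": layer name -> flat weight vector (hypothetical values)
source = {"encoder": [0.5, -0.2, 0.1], "slot_tagger": [0.9, 0.3],
          "intent_clf": [0.4, -0.7]}
target = {"encoder": [0.01, 0.02, 0.03], "slot_tagger": [0.05, 0.06],
          "intent_clf": [0.07, 0.08]}

# One possible setting: transfer only the encoder, retrain the task heads
init = transfer_weights(source, target, shared_layers={"encoder"})
```

Varying `shared_layers` per SLU module (e.g. transferring the encoder but not the slot tagger) corresponds to the abstract's observation that different transfer settings may work best for different modules.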
Pages: 5956-5960
Number of pages: 5