Predicting temporal performance drop of deployed production spoken language understanding models

Cited by: 0
Authors
Do, Quynh [1]
Gaspers, Judith [1]
Sorokin, Daniil [1]
Lehnen, Patrick [1]
Affiliations
[1] Amazon Alexa AI, Berlin, Germany
Source
INTERSPEECH 2021 | 2021
Keywords
DOI
10.21437/Interspeech.2021-580
Chinese Library Classification
R36 [Pathology]; R76 [Otorhinolaryngology];
Subject Classification Code
100104 ; 100213 ;
Abstract
In deployed real-world spoken language understanding (SLU) applications, data continuously flows into the system. This leads to distributional differences between training and application data that can deteriorate model performance. While regularly retraining the deployed model with new data helps mitigate this problem, it incurs significant computational and human costs. In this paper, we develop a method that can help guide decisions on whether a model is safe to keep in production without notable performance loss or needs to be retrained. Towards this goal, we build a performance drop regression model for an SLU model, trained offline to detect potential model drift in the production phase. We present a wide range of experiments on multiple real-world datasets, indicating that our method is useful for guiding decisions in the SLU model development cycle and for reducing model retraining costs.
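The abstract describes training a regression model offline to predict the performance drop of a deployed SLU model and using that prediction to decide whether retraining is needed. The snippet below is a minimal illustrative sketch of that idea, not the authors' implementation: the drift features, the synthetic training data, and the retraining threshold are all assumptions made for the example.

```python
# Minimal sketch (assumptions, not the paper's code): a regressor that predicts
# the performance drop of a deployed SLU model from drift features computed
# between the offline training data and a window of production traffic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical drift features per production window, e.g.
# [vocabulary overlap with training data, divergence of the intent
#  distribution, fraction of out-of-vocabulary tokens].
X_train = rng.random((200, 3))
# Observed relative F1 drop of the SLU model on those windows
# (labels obtained offline, e.g. via delayed annotation).
y_train = rng.random(200) * 0.1

# Performance drop regressor, fit offline before deployment.
drop_regressor = GradientBoostingRegressor().fit(X_train, y_train)

def needs_retraining(window_features, threshold=0.03):
    """Flag the deployed SLU model for retraining when the predicted
    performance drop on a new traffic window exceeds the threshold."""
    predicted_drop = drop_regressor.predict(window_features.reshape(1, -1))[0]
    return predicted_drop > threshold

# Example: score the drift features of one new production window.
print(needs_retraining(rng.random(3)))
```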
Pages: 1249 - 1253
Page count: 5
Related papers
50 items in total
  • [1] TEMPORAL STRUCTURE OF SPOKEN LANGUAGE UNDERSTANDING
    Marslen-Wilson, W.
    Tyler, L. K.
    COGNITION, 1980, 8 (01) : 1 - 71
  • [2] Temporal Generalization for Spoken Language Understanding
    Gaspers, Judith
    Kumar, Anoop
    Ver Steeg, Greg
    Galstyan, Aram
    2022 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, NAACL-HLT 2022, 2022, : 37 - 44
  • [3] Discriminative Models for Spoken Language Understanding
    Wang, Ye-Yi
    Acero, Alex
    INTERSPEECH 2006 AND 9TH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, VOLS 1-5, 2006, : 2426 - 2429
  • [4] RNN TRANSDUCER MODELS FOR SPOKEN LANGUAGE UNDERSTANDING
    Thomas, Samuel
    Kuo, Hong-Kwang J.
    Saon, George
    Tuske, Zoltan
    Kingsbury, Brian
    Kurata, Gakuto
    Kons, Zvi
    Hoory, Ron
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7493 - 7497
  • [5] On the Evaluation of Speech Foundation Models for Spoken Language Understanding
    Arora, Siddhant
    Pasad, Ankita
    Chien, Chung-Ming
    Han, Jionghao
    Sharma, Roshan
    Jung, Jee-weon
    Dhamyal, Hira
    Chen, William
    Shon, Suwon
    Lee, Hung-yi
    Livescu, Karen
    Watanabe, Shinji
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 11923 - 11938
  • [6] JOINT GENERATIVE AND DISCRIMINATIVE MODELS FOR SPOKEN LANGUAGE UNDERSTANDING
    Dinarelli, Marco
    Moschitti, Alessandro
    Riccardi, Giuseppe
    2008 IEEE WORKSHOP ON SPOKEN LANGUAGE TECHNOLOGY: SLT 2008, PROCEEDINGS, 2008, : 61 - 64
  • [7] Data Augmentation for Spoken Language Understanding via Pretrained Language Models
    Peng, Baolin
    Zhu, Chenguang
    Zeng, Michael
    Gao, Jianfeng
    INTERSPEECH 2021, 2021, : 1219 - 1223
  • [8] Improving Conversation-Context Language Models with Multiple Spoken Language Understanding Models
    Masumura, Ryo
    Tanaka, Tomohiro
    Ando, Atsushi
    Kamiyama, Hosana
    Oba, Takanobu
    Kobashikawa, Satoshi
    Aono, Yushi
    INTERSPEECH 2019, 2019, : 834 - 838
  • [9] Jointly predicting dialog act and named entity for spoken language understanding
    Jeong, Minwoo
    Lee, Gary Geunbae
    2006 IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP, 2006, : 66 - +
  • [10] ON-LINE ADAPTATION OF SEMANTIC MODELS FOR SPOKEN LANGUAGE UNDERSTANDING
    Bayer, Ali Orkan
    Riccardi, Giuseppe
    2013 IEEE WORKSHOP ON AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING (ASRU), 2013, : 90 - 95