COMPRESSING TRANSFORMER-BASED ASR MODEL BY TASK-DRIVEN LOSS AND ATTENTION-BASED MULTI-LEVEL FEATURE DISTILLATION

Cited by: 0
Authors
Lv, Yongjie [1]
Wang, Longbiao [1]
Ge, Meng [1]
Li, Sheng [2]
Ding, Chenchen [2]
Pan, Lixin [4]
Wang, Yuguang [4]
Dang, Jianwu [1,3]
Honda, Kiyoshi [1]
Affiliations
[1] Tianjin Key Laboratory of Cognitive Computing and Application, College of Intelligence and Computing, Tianjin University, Tianjin, China
[2] National Institute of Information and Communications Technology (NICT), Kyoto, Japan
[3] Japan Advanced Institute of Science and Technology, Ishikawa, Japan
[4] Huiyan Technology (TianJin) Co., Ltd., Tianjin, China
Source
ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings | 2022, Vol. 2022-May
Keywords
Compendex
DOI
Not available
Pages: 7992-7996