TMT: A Transformer-based Modal Translator for Improving Multimodal Sequence Representations in Audio Visual Scene-aware Dialog

Cited by: 4
Authors
Li, Wubo [1 ]
Jiang, Dongwei [1 ]
Zou, Wei [1 ]
Li, Xiangang [1 ]
Affiliations
[1] Didi Chuxing, Beijing, Peoples R China
Source
INTERSPEECH 2020 | 2020
Keywords
multimodal learning; audio-visual scene-aware dialog; neural machine translation; multi-task learning
DOI
10.21437/Interspeech.2020-2359
CLC Classification
R36 [Pathology]; R76 [Otorhinolaryngology]
Subject Classification Codes
100104; 100213
Abstract
Audio Visual Scene-aware Dialog (AVSD) is the task of generating responses in a dialog about a given video. The previous state-of-the-art model for this task achieves superior performance with a Transformer-based architecture, but it remains limited in how well it learns modality representations. Inspired by Neural Machine Translation (NMT), we propose the Transformer-based Modal Translator (TMT), which learns representations of a source modal sequence by translating it into a related target modal sequence in a supervised manner. Building on Multimodal Transformer Networks (MTN), we apply TMT to video and dialog, yielding MTN-TMT for the video-grounded dialog system. On the AVSD track of the Dialog System Technology Challenge 7, MTN-TMT outperforms MTN and the other submitted models on both the Video and Text task and the Text Only task. Compared with MTN, MTN-TMT improves all metrics, notably achieving a relative improvement of up to 14.1% on CIDEr.
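The core mechanism the abstract describes is a supervised auxiliary task: a translator maps the source modal sequence toward a related target modal sequence, and its loss is added to the main response-generation loss. A toy numpy sketch of that multi-task objective (all shapes, names, and the weight `lam` are assumptions; the actual TMT uses Transformer encoder-decoder layers, which a single pooled linear map merely stands in for here):

```python
import numpy as np

rng = np.random.default_rng(0)

T_src, T_tgt, d = 8, 6, 16                 # sequence lengths and feature dim
src = rng.standard_normal((T_src, d))      # e.g. a video feature sequence
tgt = rng.standard_normal((T_tgt, d))      # e.g. a caption embedding sequence
W = 0.1 * rng.standard_normal((d, d))      # "translator" parameters (stand-in)

def translate(src, W):
    """Mean-pool the source sequence and project it toward the target modality."""
    pooled = src.mean(axis=0, keepdims=True)          # (1, d)
    return np.repeat(pooled @ W, T_tgt, axis=0)       # (T_tgt, d)

def translation_loss(src, tgt, W):
    """Supervised modal-translation loss: MSE between translated source and target."""
    return np.mean((translate(src, W) - tgt) ** 2)

def grad_W(src, tgt, W):
    """Analytic gradient of the MSE translation loss w.r.t. W."""
    pooled = src.mean(axis=0, keepdims=True)
    err = translate(src, W) - tgt
    return (2.0 / d) * pooled.T @ err.mean(axis=0, keepdims=True)

# Multi-task objective: main response-generation loss (a placeholder scalar
# here) plus the weighted auxiliary translation loss.
main_loss, lam = 1.0, 0.5
total_loss = main_loss + lam * translation_loss(src, tgt, W)

# One gradient step on the auxiliary task reduces the translation loss,
# nudging the source-modality representation toward the target modality.
before = translation_loss(src, tgt, W)
W = W - 1.0 * grad_W(src, tgt, W)
after = translation_loss(src, tgt, W)
```

The point of the sketch is the objective structure, not the translator itself: the translation loss is only a training signal, so at inference time the translator can be discarded while the improved source-modality representations remain.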
Pages: 3501-3505
Number of pages: 5