Bridging Video and Text: A Two-Step Polishing Transformer for Video Captioning

Cited by: 18
Authors
Xu, Wanru [1 ,2 ]
Miao, Zhenjiang [1 ,2 ]
Yu, Jian [3 ,4 ]
Tian, Yi [3 ,4 ]
Wan, Lili [1 ,2 ]
Ji, Qiang [5 ]
Affiliations
[1] Beijing Jiaotong Univ, Inst Informat Sci, Beijing 100044, Peoples R China
[2] Beijing Key Lab Adv Informat Sci & Network Techno, Beijing 100044, Peoples R China
[3] Beijing Jiaotong Univ, Sch Comp & Informat Technol, Beijing 100044, Peoples R China
[4] Beijing Key Lab Traff Data Anal & Min, Beijing 100044, Peoples R China
[5] Rensselaer Polytech Inst, Dept Elect & Comp Engn, Troy, NY 12180 USA
Keywords
Semantics; Visualization; Decoding; Transformers; Task analysis; Planning; Training; Video captioning; transformer; polishing mechanism; cross-modal modeling; Networks
DOI
10.1109/TCSVT.2022.3165934
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
Video captioning is a joint task of computer vision and natural language processing that aims to describe video content with natural language sentences. Most existing methods cast this task as a mapping problem, learning a mapping from visual features to natural language and generating captions directly from videos. However, the underlying challenge of video captioning, i.e., sequence-to-sequence mapping across different domains, is still not well handled. To address this problem, we introduce a polishing mechanism that mimics the human polishing process and propose a generate-and-polish framework for video captioning. In this paper, we propose a two-step transformer-based polishing network (TSTPN) consisting of two sub-modules: a generation module that produces a caption candidate and a polishing module that gradually refines the generated candidate. Specifically, the candidate provides global information about the visual content in a semantically meaningful order: first, it serves as a semantic intermediary to bridge the semantic gap between text and video, with a cross-modal attention mechanism for better cross-modal modeling; second, it provides a global planning ability that maintains the semantic consistency and fluency of the whole sentence for better sequence mapping. In experiments, extensive evaluations show that the proposed TSTPN achieves comparable or even better performance than state-of-the-art methods on the benchmark datasets.
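
The abstract above outlines the two-step architecture; the sketch below is a minimal, hedged illustration of such a generate-and-polish pipeline in PyTorch. Module names, feature dimensions (e.g., 2048-dimensional CNN features), and the way the caption candidate is fed to the polishing step are illustrative assumptions under this reading, not the authors' released implementation.

    # Minimal sketch of a generate-and-polish captioning pipeline (illustrative only).
    import torch
    import torch.nn as nn

    class TwoStepCaptioner(nn.Module):
        def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.visual_proj = nn.Linear(2048, d_model)  # assumed CNN feature size
            enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
            self.video_encoder = nn.TransformerEncoder(enc_layer, num_layers)
            # Step 1: generation module decodes a caption candidate from video features.
            self.generator = nn.TransformerDecoder(dec_layer, num_layers)
            # Step 2: polishing module re-attends to the candidate and the video
            # (cross-modal attention) to refine the sentence.
            self.polisher = nn.TransformerDecoder(
                nn.TransformerDecoderLayer(d_model, nhead, batch_first=True), num_layers)
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, video_feats, draft_tokens, polish_tokens):
            mem = self.video_encoder(self.visual_proj(video_feats))   # (B, T, d_model)
            draft = self.generator(self.embed(draft_tokens), mem)     # caption candidate
            # The candidate states act as the semantic intermediary: the polisher
            # attends to them together with the video memory when refining.
            joint_mem = torch.cat([mem, draft], dim=1)
            refined = self.polisher(self.embed(polish_tokens), joint_mem)
            return self.out(draft), self.out(refined)

In this reading, the generation module first produces a draft from the encoded video, and the polishing module then attends jointly to the video memory and the draft, using the draft as the semantic bridge described in the abstract.
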
Pages: 6293-6307
Page count: 15