Multi-scale features with temporal information guidance for video captioning

Times Cited: 0
Authors
Zhao, Hong [1 ]
Chen, Zhiwen [2 ]
Yang, Yi [2 ,3 ]
Affiliations
[1] Lanzhou Univ Technol, Sch Comp & Commun, Lanzhou 730050, Gansu, Peoples R China
[2] Lanzhou Univ, Sch Informat Sci Engn, Lanzhou 730000, Gansu, Peoples R China
[3] Key Lab Artificial Intelligence & Comp Power Techn, Lanzhou, Gansu, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Video captioning; Multi-scale features; Temporal information; Transformer; Gated decoding module
DOI
10.1016/j.engappai.2024.109102
CLC Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Video captioning aims to automatically generate a textual description for a video; it is a challenging task that has drawn increasing attention recently. Although existing methods have achieved impressive performance, two challenging problems remain to be solved. (1) In the feature encoding stage, existing methods focus only on local features or global features to improve the accuracy or readability of the generated sentences, leaving useful information in the given video underutilized. (2) In the decoding stage, a vanilla Transformer is usually used to reason about visual relations and generate the textual captions; it does not make good use of inter-frame temporal information, which leads to relation ambiguity and poor readability in the generated captions. To solve these problems, we propose a video captioning method based on multi-scale features with temporal information guidance. First, the pre-trained model CLIP is employed to extract video features. Second, global and local features are encoded separately to learn the overall and detailed information of the video and to construct multi-scale features. Finally, a gating unit is used to alleviate the inability of existing Transformer-based decoder modules to make good use of contextual temporal information. Extensive experiments on two publicly available datasets show that, compared with the best model among the comparison methods, the proposed model improves the BLEU, METEOR, ROUGE-L, and CIDEr metrics by 4.7%, 2.2%, 0.6%, and 2.0% on the MSR-VTT dataset, and by 5.1%, 9.0%, 5.8%, and 6.7% on the MSVD dataset, demonstrating that our method achieves more competitive performance.
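The abstract describes the pipeline only at a high level, so the following is a minimal, hypothetical PyTorch sketch of the two named ideas: encoding global (clip-level) and local (frame-level) CLIP features separately into a multi-scale representation, and a sigmoid gating unit that blends decoder states with contextual temporal information. All class names, dimensions, and the exact gating formulation are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of multi-scale encoding and gated temporal fusion.
# Names and dimensions are assumptions; CLIP features are mocked with random tensors.
import torch
import torch.nn as nn


class MultiScaleEncoder(nn.Module):
    """Encode global (clip-level) and local (frame-level) features with separate
    Transformer encoders, then concatenate them along the token dimension."""

    def __init__(self, d_model: int = 512, n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        self.global_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), n_layers)
        self.local_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), n_layers)

    def forward(self, global_feats: torch.Tensor, local_feats: torch.Tensor) -> torch.Tensor:
        # global_feats: (B, Tg, d_model), local_feats: (B, Tl, d_model)
        return torch.cat(
            [self.global_enc(global_feats), self.local_enc(local_feats)], dim=1)


class GatedTemporalFusion(nn.Module):
    """Sigmoid gate that blends decoder hidden states with an inter-frame
    temporal context before word prediction (one plausible gating unit)."""

    def __init__(self, d_model: int = 512):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, hidden: torch.Tensor, temporal_ctx: torch.Tensor) -> torch.Tensor:
        # hidden, temporal_ctx: (B, L, d_model)
        g = torch.sigmoid(self.gate(torch.cat([hidden, temporal_ctx], dim=-1)))
        return g * hidden + (1.0 - g) * temporal_ctx


if __name__ == "__main__":
    B, Tg, Tl, D = 2, 1, 16, 512
    enc, fuse = MultiScaleEncoder(D), GatedTemporalFusion(D)
    memory = enc(torch.randn(B, Tg, D), torch.randn(B, Tl, D))  # (B, Tg + Tl, D)
    out = fuse(torch.randn(B, 10, D), torch.randn(B, 10, D))    # (B, 10, D)
    print(memory.shape, out.shape)

In this sketch the gate lets the model decide, per position, how much of the temporal context to mix into the decoder state, which is one common way to inject inter-frame information without altering the Transformer backbone.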
Pages: 10