Learning Video-Text Aligned Representations for Video Captioning

Cited by: 7
Authors
Shi, Yaya [1]
Xu, Haiyang [2]
Yuan, Chunfeng [3]
Li, Bing [3]
Hu, Weiming [3,4,5]
Zha, Zheng-Jun [1]
Affiliations
[1] Univ Sci & Technol China, Sch Informat Sci & Technol, 96 Jinzhai Rd, Hefei 230026, Anhui, Peoples R China
[2] Alibaba Grp, 969 Wenyi West Rd, Hangzhou 311121, Zhejiang, Peoples R China
[3] Chinese Acad Sci, Inst Automat, NLPR, 95 Zhongguancun East Rd, Beijing 100190, Peoples R China
[4] Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
[5] CAS Ctr Excellence Brain Sci & Intelligence Techn, 95 Zhongguancun East Rd, Beijing 100190, Peoples R China
Funding
Beijing Natural Science Foundation; National Key Research and Development Program of China;
Keywords
Video captioning; video-text alignment; aligned representation;
DOI
10.1145/3546828
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Video captioning requires a model to understand video, align video with text, and generate text. Due to the semantic gap between vision and language, video-text alignment, which maps representations from the visual domain to the language domain, is a crucial step in reducing this gap. However, existing methods often overlook this step, so the decoder must take visual representations directly as input, which increases the decoder's workload and limits its ability to generate semantically correct captions. In this paper, we propose a video-text alignment module, consisting of a retrieval unit and an alignment unit, to learn video-text aligned representations for video captioning. Specifically, we first propose a retrieval unit that retrieves sentences as an additional input, which serves as a semantic anchor between the visual scene and the language description. Then, we employ an alignment unit that takes the video and the retrieved sentences as input and conducts video-text alignment: the representations of the two modalities are aligned in a shared semantic space. The resulting video-text aligned representations are used to generate semantically correct captions. Moreover, the retrieved sentences provide rich semantic concepts that help generate distinctive captions. Experiments on two public benchmarks, VATEX and MSR-VTT, demonstrate that our method outperforms state-of-the-art methods by a large margin. Qualitative analysis shows that our method generates correct and distinctive captions.
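Although this record does not include the paper's implementation, the mechanism the abstract describes (retrieve sentences as semantic anchors, then align video and text representations in a shared semantic space) can be sketched in a few lines. Below is a minimal, hypothetical PyTorch sketch: the module names (RetrievalUnit, AlignmentUnit), the cosine-similarity retrieval, the symmetric InfoNCE-style alignment loss, and all dimensions are illustrative assumptions, not the authors' exact design.

# Minimal, hypothetical sketch of the video-text alignment idea from the
# abstract. All names, dimensions, and loss choices are illustrative
# assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RetrievalUnit(nn.Module):
    """Retrieves the top-k corpus sentences most similar to a video query."""
    def __init__(self, corpus_embeds: torch.Tensor, video_dim: int, k: int = 3):
        super().__init__()
        # Precomputed sentence embeddings (num_sentences, text_dim), assumed given.
        self.register_buffer("corpus", F.normalize(corpus_embeds, dim=-1))
        # Project pooled video features into the sentence-embedding space.
        self.query_proj = nn.Linear(video_dim, corpus_embeds.size(-1))
        self.k = k

    def forward(self, video_feats: torch.Tensor) -> torch.Tensor:
        query = F.normalize(self.query_proj(video_feats), dim=-1)  # (B, text_dim)
        sims = query @ self.corpus.t()                             # (B, N)
        topk = sims.topk(self.k, dim=-1).indices                   # (B, k)
        return self.corpus[topk]                                   # (B, k, text_dim)

class AlignmentUnit(nn.Module):
    """Projects both modalities into a shared semantic space."""
    def __init__(self, video_dim: int, text_dim: int, shared_dim: int = 512):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, shared_dim)
        self.text_proj = nn.Linear(text_dim, shared_dim)

    def forward(self, video_feats, text_feats):
        v = F.normalize(self.video_proj(video_feats), dim=-1)  # (B, shared_dim)
        t = F.normalize(self.text_proj(text_feats), dim=-1)    # (B, shared_dim)
        return v, t

def alignment_loss(v, t, temperature: float = 0.07):
    """Symmetric InfoNCE-style loss pulling matched video/text pairs together."""
    logits = v @ t.t() / temperature                   # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

if __name__ == "__main__":
    B, video_dim, text_dim = 4, 1024, 768
    corpus = torch.randn(100, text_dim)                # stand-in sentence corpus

    retriever = RetrievalUnit(corpus, video_dim, k=3)
    aligner = AlignmentUnit(video_dim, text_dim)

    video = torch.randn(B, video_dim)                  # pooled video features
    anchors = retriever(video)                         # (B, 3, text_dim) semantic anchors
    v, t = aligner(video, anchors.mean(dim=1))         # mean-pool retrieved anchors
    print(alignment_loss(v, t))                        # alignment training signal

In this framing, the aligned video representations (v above) would then feed the caption decoder in place of raw visual features, which is the workload reduction the abstract argues for.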
Pages: 21