Vision-Enhanced and Consensus-Aware Transformer for Image Captioning

Cited by: 28
Authors
Cao, Shan [1 ,2 ]
An, Gaoyun [1 ,2 ]
Zheng, Zhenxing [1 ,2 ]
Wang, Zhiyong [3 ]
Affiliations
[1] Beijing Jiaotong Univ, Inst Informat Sci, Beijing 100044, Peoples R China
[2] Beijing Key Lab Adv Informat Sci & Network Techno, Beijing 100044, Peoples R China
[3] Univ Sydney, Sch Comp Sci, Sydney, NSW 2006, Australia
Funding
National Natural Science Foundation of China;
Keywords
Transformers; Visualization; Decoding; Semantics; Task analysis; Convolution; Visual perception; Image captioning; vision-enhanced encoder; consensus-aware decoder; consensus knowledge;
DOI
10.1109/TCSVT.2022.3178844
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
Image captioning generates natural-language descriptions for a given image. Due to its great potential for a wide range of applications, many deep learning-based methods have been proposed. The co-occurrence of words, such as mouse and keyboard, constitutes commonsense knowledge, which is referred to as consensus. However, it is challenging to incorporate such commonsense knowledge when producing captions with rich, natural, and meaningful semantics. In this paper, a Vision-enhanced and Consensus-aware Transformer (VCT) is proposed to exploit both visual information and consensus knowledge for image captioning with three key components: a vision-enhanced encoder, a consensus-aware knowledge representation generator, and a consensus-aware decoder. The vision-enhanced encoder extends the vanilla self-attention module with a memory-based attention module and a visual perception module to learn a better visual representation of an image. Specifically, the memory-based attention module uses scene memory to leverage the relationships between image regions and the image's global context. The visual perception module further enhances the correlation among neighboring tokens in both the spatial and channel-wise dimensions. To learn consensus-aware representations, a word correlation graph is constructed by computing the statistical co-occurrence between semantic concepts; consensus knowledge is then acquired by applying a graph convolutional network in the consensus-aware knowledge representation generator. Finally, this consensus knowledge is integrated into the consensus-aware decoder through consensus memory and a knowledge-based control module to produce the caption. Experimental results on two popular benchmark datasets (MSCOCO and Flickr30k) demonstrate that our proposed model achieves state-of-the-art performance, and extensive ablation studies validate the effectiveness of each component.
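The consensus-aware knowledge representation generator outlined in the abstract builds a word correlation graph from concept co-occurrence statistics and runs a graph convolutional network over it. The paper's exact formulation is not reproduced here; the sketch below is a minimal PyTorch illustration under assumed details (a toy concept vocabulary, row-normalized co-occurrence weights with self-loops, symmetric normalization, and the hypothetical classes GCNLayer and ConsensusKnowledgeGenerator) of how such consensus representations can be produced in general.

```python
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy concept vocabulary and captions; the actual model mines semantic
# concepts from the training captions of MSCOCO/Flickr30k (assumption here).
concepts = ["mouse", "keyboard", "desk", "monitor", "cat"]
idx = {w: i for i, w in enumerate(concepts)}
captions = [
    ["mouse", "keyboard", "desk"],
    ["keyboard", "monitor", "desk"],
    ["cat", "desk"],
    ["mouse", "keyboard", "monitor"],
]

# 1) Word correlation graph: statistical co-occurrence between concepts.
n = len(concepts)
cooc = torch.zeros(n, n)
for cap in captions:
    present = sorted({idx[w] for w in cap if w in idx})
    for i, j in itertools.combinations(present, 2):
        cooc[i, j] += 1.0
        cooc[j, i] += 1.0

# Row-normalize counts into correlation weights and add self-loops.
adj = cooc / cooc.sum(dim=1, keepdim=True).clamp(min=1e-6)
adj = adj + torch.eye(n)

# Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in standard GCNs.
deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
a_hat = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)


class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W + b)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):
        return F.relu(self.linear(a_hat @ h))


class ConsensusKnowledgeGenerator(nn.Module):
    """Two-layer GCN over the word correlation graph that outputs
    consensus-aware concept representations (a sketch, not the paper's code)."""

    def __init__(self, num_concepts, embed_dim=64, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_concepts, embed_dim)
        self.gcn1 = GCNLayer(embed_dim, hidden_dim)
        self.gcn2 = GCNLayer(hidden_dim, hidden_dim)

    def forward(self, a_hat):
        h = self.embed.weight          # initial concept embeddings
        h = self.gcn1(h, a_hat)
        h = self.gcn2(h, a_hat)        # consensus knowledge representations
        return h


generator = ConsensusKnowledgeGenerator(num_concepts=n)
consensus_repr = generator(a_hat)      # shape: (n, hidden_dim)
print(consensus_repr.shape)
```

In the full model, representations of this kind would then be written into the consensus memory that the consensus-aware decoder attends to, via the knowledge-based control module, while generating each word of the caption.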
Pages: 7005-7018
Number of pages: 14