High-Order Interaction Learning for Image Captioning

Cited by: 68
Authors
Wang, Yanhui [1]
Xu, Ning [1]
Liu, An-An [1]
Li, Wenhui [1]
Zhang, Yongdong [2]
Affiliations
[1] Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
[2] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230052, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Visualization; Semantics; Feature extraction; Decoding; Task analysis; Ions; Encoding; Image captioning; high-order interaction; encoder-decoder framework;
DOI
10.1109/TCSVT.2021.3121062
CLC number
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline code
0808; 0809;
Abstract
Image captioning aims to understand various semantic concepts (e.g., objects and relationships) in an image and integrate them into a sentence-level description; it is therefore necessary to learn the interactions among these concepts. If we define the context of an interaction as a subject-predicate-object triplet, most current methods focus only on a single triplet, i.e., first-order interaction, to generate sentences. Intuitively, humans can perceive high-order interactions among concepts across two or more triplets to describe an image. For example, given the triplets man-cutting-sandwich and man-with-knife, it is natural to integrate them and predict the sentence "man cutting sandwich with knife"; this depends on the high-order interaction between cutting and knife across different triplets. Exploiting high-order interaction is therefore expected to benefit image captioning by supporting reasoning over multiple triplets. In this paper, we introduce a novel high-order interaction learning method over detected objects and relationships for image captioning under the umbrella of the encoder-decoder framework. We first extract a set of object and relationship features from an image. During the encoding stage, an interactive refining network is proposed to learn high-order representations by modeling intra- and inter-object feature interactions in a self-attention fashion. During the decoding stage, an interactive fusion network is proposed to integrate object and relationship information by strengthening their high-order interaction based on language context for sentence generation. In this way, we learn object-relationship dependencies at different stages, which provides abundant cues for both visual understanding and caption generation. Extensive experiments show that the proposed method achieves competitive performance against state-of-the-art methods on the MSCOCO dataset, and ablation studies further validate its effectiveness.
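No code accompanies this record. Purely as an illustrative sketch (all names, shapes, and weights below are hypothetical, not the authors' implementation), the "self-attention fashion" in which the abstract says the interactive refining network models intra- and inter-object feature interactions could look like this: every object/relationship feature is refined by attending over all the others, so information from one triplet can flow into another.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Refine each row of X by attending over all rows (hypothetical sketch).

    X: (n, d) matrix of detected object/relationship features.
    Wq, Wk, Wv: (d, d) projection matrices for queries, keys, values.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])          # scaled dot-product scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ V                              # each feature mixes in all others

# Toy example: 5 detected features of dimension 8.
rng = np.random.default_rng(0)
n, d = 5, 8
X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
refined = self_attention(X, Wq, Wk, Wv)
print(refined.shape)  # (5, 8): one refined vector per input feature
```

Because the attention weights couple every pair of features, a relationship feature such as "cutting" can be refined by "knife" even when the two come from different triplets, which is the intuition behind the high-order interaction described above.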
Pages: 4417-4430
Page count: 14
Related papers
50 records
  • [21] Hossain, Md. Zakir; Sohel, Ferdous; Shiratuddin, Mohd Fairuz; Laga, Hamid; Bennamoun, Mohammed. Text to Image Synthesis for Improved Image Captioning. IEEE ACCESS, 2021, 9: 64918-64928.
  • [22] Lei, Zhou; Zhou, Congcong; Chen, Shengbo; Huang, Yiyong; Liu, Xianrui. A Sparse Transformer-Based Approach for Image Captioning. IEEE ACCESS, 2020, 8: 213437-213446.
  • [23] Guo, Longteng; Liu, Jing; Lu, Shichen; Lu, Hanqing. Show, Tell, and Polish: Ruminant Decoding for Image Captioning. IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (08): 2149-2162.
  • [24] Zheng, Chengyu; Nie, Jie; Wang, Zhaoxin; Song, Ning; Wang, Jingyu; Wei, Zhiqiang. High-Order Semantic Decoupling Network for Remote Sensing Image Semantic Segmentation. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61.
  • [25] Wang, Lanxiao; Qiu, Heqian; Qiu, Benliu; Meng, Fanman; Wu, Qingbo; Li, Hongliang. TridentCap: Image-Fact-Style Trident Semantic Framework for Stylized Image Captioning. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (05): 3563-3575.
  • [26] Kandala, Hitesh; Saha, Sudipan; Banerjee, Biplab; Zhu, Xiao Xiang. Exploring Transformer and Multilabel Classification for Remote Sensing Image Captioning. IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2022, 19.
  • [27] Li, Xuelong; Zhang, Xueting; Huang, Wei; Wang, Qi. Truncation Cross Entropy Loss for Remote Sensing Image Captioning. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2021, 59 (06): 5246-5257.
  • [28] Jiang, Wenhui; Zhu, Minwei; Fang, Yuming; Shi, Guangming; Zhao, Xiaowei; Liu, Yang. Visual Cluster Grounding for Image Captioning. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31: 3920-3934.
  • [29] Wang, Qi; Huang, Wei; Zhang, Xueting; Li, Xuelong. GLCM: Global-Local Captioning Model for Remote Sensing Image Captioning. IEEE TRANSACTIONS ON CYBERNETICS, 2023, 53 (11): 6910-6922.
  • [30] Yu, Jun; Li, Jing; Yu, Zhou; Huang, Qingming. Multimodal Transformer With Multi-View Visual Representation for Image Captioning. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2020, 30 (12): 4467-4480.