Cascade Semantic Prompt Alignment Network for Image Captioning

Cited by: 3
Authors
Li, Jingyu [1 ]
Zhang, Lei [2 ]
Zhang, Kun [2 ]
Hu, Bo [2 ]
Xie, Hongtao [2 ]
Mao, Zhendong [1 ,3 ]
Affiliations
[1] Univ Sci & Technol China, Sch Cyberspace Sci & Technol, Hefei 230022, Peoples R China
[2] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230022, Peoples R China
[3] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei 230022, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Semantics; Visualization; Feature extraction; Detectors; Integrated circuit modeling; Transformers; Task analysis; Image captioning; textual-visual alignment; RegionCLIP; prompt; TRANSFORMER;
DOI
10.1109/TCSVT.2023.3343520
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology & Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
Image captioning (IC) takes an image as input and generates open-form descriptions in the domain of natural language. IC requires detecting objects, modeling the relations between them, assessing the semantics of the scene, and representing the extracted knowledge in a language space. Previous detector-based models suffer from limited semantic perception capability due to predefined object detection classes and the semantic inconsistency between visual region features and the numeric labels of the detector. Inspired by the fact that text prompts in pre-trained multi-modal models contain specific linguistic knowledge rather than discrete labels, and excel at open-form semantic understanding of visual inputs and their representation in the domain of natural language, we aim to distill and leverage the transferable language knowledge of the pre-trained RegionCLIP model to remedy the detector for generating rich image captions. In this paper, we propose a novel Cascade Semantic Prompt Alignment Network (CSA-Net) that produces an aligned fine-grained regional semantic-visual space in which rich and consistent textual semantic details are automatically incorporated into region features. Specifically, we first align the object semantic prompts and region features to produce semantically grounded object features. Then, we employ these object features and relation semantic prompts to predict the relations between objects. Finally, the enhanced object and relation features are fed into the language decoder to generate rich descriptions. Extensive experiments on the MSCOCO dataset show that our method achieves new state-of-the-art performance, with 145.2% (single model) and 147.0% (ensemble of 4 models) CIDEr scores on the 'Karpathy' split, and 141.6% (c5) and 144.1% (c40) CIDEr scores on the official online test server. Notably, CSA-Net generates captions of higher quality and diversity, achieving a RefCLIP-S score of 83.2. Moreover, we expand the testbed to another challenging captioning benchmark, the nocaps dataset, on which CSA-Net demonstrates superior zero-shot capability. Source code is released at https://github.com/CrossmodalGroup/CSA-Net.
Pages: 5266-5281
Page count: 16
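
To make the two-stage cascade described in the abstract concrete, below is a minimal sketch, assuming generic multi-head cross-attention, placeholder feature dimensions, and random stand-ins for region features and prompt embeddings; it illustrates the alignment cascade (object prompts grounded onto region features, then relation prompts applied to object pairs) and is not the authors' released implementation, which is available at the repository above.

import torch
import torch.nn as nn

class CascadePromptAlignmentSketch(nn.Module):
    # Hypothetical illustration: (1) region features attend to object semantic
    # prompts to become semantically grounded object features; (2) pairwise
    # object features attend to relation semantic prompts to form relation
    # features. Both outputs would condition a caption decoder (omitted here).
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.obj_align = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.rel_align = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pair_proj = nn.Linear(2 * dim, dim)

    def forward(self, regions, obj_prompts, rel_prompts):
        # regions:     (B, N, dim) visual region features from the detector
        # obj_prompts: (B, K, dim) text embeddings of object prompts
        # rel_prompts: (B, M, dim) text embeddings of relation prompts
        attn, _ = self.obj_align(regions, obj_prompts, obj_prompts)
        obj_feats = regions + attn  # residual keeps the visual grounding
        # Enumerate ordered object pairs: (B, N*N, 2*dim) -> (B, N*N, dim).
        B, N, D = obj_feats.shape
        pairs = torch.cat(
            [obj_feats.unsqueeze(2).expand(B, N, N, D),
             obj_feats.unsqueeze(1).expand(B, N, N, D)],
            dim=-1,
        ).reshape(B, N * N, 2 * D)
        pair_feats = self.pair_proj(pairs)
        rel_attn, _ = self.rel_align(pair_feats, rel_prompts, rel_prompts)
        rel_feats = pair_feats + rel_attn
        return obj_feats, rel_feats

# Smoke test with random tensors standing in for real features and prompts.
model = CascadePromptAlignmentSketch()
obj, rel = model(torch.randn(2, 10, 512),   # 10 regions
                 torch.randn(2, 80, 512),   # 80 object prompts
                 torch.randn(2, 20, 512))   # 20 relation prompts
print(obj.shape, rel.shape)  # torch.Size([2, 10, 512]) torch.Size([2, 100, 512])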