Beyond Generic: Enhancing Image Captioning with Real-World Knowledge using Vision-Language Pre-Training Model

Cited by: 2
Authors
Cheng, Kanzhi [1 ]
Song, Wenpo [1 ]
Ma, Zheng [1 ]
Zhu, Wenhao [1 ]
Zhu, Zixuan [2 ]
Zhang, Jianbing [1 ]
Affiliations
[1] Nanjing University, National Key Laboratory for Novel Software Technology, Nanjing, People's Republic of China
[2] University of Glasgow, Glasgow, Scotland, United Kingdom
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 | 2023
Keywords
Image Captioning; Vision-Language Pre-Training; Knowledge
DOI
10.1145/3581783.3611987
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Current captioning approaches tend to generate correct but "generic" descriptions that lack real-world knowledge, e.g., named entities and contextual information. Considering that Vision-Language Pre-Training (VLP) models have acquired a massive amount of such knowledge from large-scale web-harvested data, it is promising to leverage the generalizability of VLP models to incorporate knowledge into image descriptions. However, using VLP models faces two challenges: zero-shot inference suffers from knowledge hallucination, which leads to low-quality descriptions, while the generic bias introduced by downstream-task fine-tuning hinders the VLP model from expressing knowledge. To address these concerns, we propose a simple yet effective method called Knowledge-guided Replay (K-Replay), which enables the retention of pre-training knowledge during fine-tuning. Our approach consists of two parts: (1) a knowledge prediction task on automatically collected replay exemplars that continuously awakens the VLP model's memory of knowledge, thus preventing the model from collapsing into the generic pattern; and (2) a knowledge distillation constraint that improves the faithfulness of generated descriptions, hence alleviating knowledge hallucination. To evaluate knowledge-enhanced descriptions, we construct a novel captioning benchmark, KnowCap, containing knowledge of landmarks, famous brands, special foods, and movie characters. Experimental results show that our approach effectively incorporates knowledge into descriptions, outperforming a strong VLP baseline by 20.9 points (78.7 -> 99.6) in CIDEr score and 20.5 percentage points (34.0% -> 54.5%) in knowledge recognition accuracy. Our code and data are available at https://github.com/njucckevin/KnowCap.
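The abstract describes a fine-tuning objective that combines the usual captioning loss with a knowledge prediction task on replay exemplars and a knowledge distillation constraint. Below is a minimal, hypothetical sketch of how such a combined loss could be assembled in PyTorch; the tensor names, the knowledge-word masking scheme, and the weights lambda_know and lambda_kd are illustrative assumptions, not the authors' implementation (see the linked KnowCap repository for the actual code).

```python
import torch
import torch.nn.functional as F

def k_replay_style_loss(caption_logits,    # (B, T, V) logits on the downstream captioning batch
                        caption_targets,   # (B, T) ground-truth caption token ids
                        replay_logits,     # (R, T, V) logits on automatically collected replay exemplars
                        replay_targets,    # (R, T) token ids of the knowledge-bearing replay captions
                        knowledge_mask,    # (R, T) float mask: 1.0 at knowledge-word positions, else 0.0
                        teacher_logits,    # (R, T, V) logits of the frozen pre-trained VLP model
                        lambda_know=1.0,
                        lambda_kd=1.0):
    """Hypothetical combination of captioning, knowledge-prediction, and KD losses."""
    vocab = caption_logits.size(-1)

    # (a) Standard cross-entropy captioning loss on the downstream data.
    loss_cap = F.cross_entropy(caption_logits.reshape(-1, vocab),
                               caption_targets.reshape(-1))

    # (b) Knowledge prediction on replay exemplars: only knowledge-word positions
    #     contribute, so fine-tuning keeps rehearsing the pre-training knowledge.
    log_probs = F.log_softmax(replay_logits, dim=-1)
    token_nll = F.nll_loss(log_probs.reshape(-1, vocab),
                           replay_targets.reshape(-1),
                           reduction="none")
    loss_know = (token_nll * knowledge_mask.reshape(-1)).sum() / knowledge_mask.sum().clamp(min=1.0)

    # (c) KL distillation towards the frozen pre-trained model on the replay
    #     exemplars, intended to keep generated knowledge faithful and curb hallucination.
    loss_kd = F.kl_div(log_probs,
                       F.softmax(teacher_logits, dim=-1),
                       reduction="batchmean")

    return loss_cap + lambda_know * loss_know + lambda_kd * loss_kd
```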
Pages: 5038-5047
Page count: 10