Unpaired Image Captioning With Semantic-Constrained Self-Learning

Cited by: 31
Authors
Ben, Huixia [1 ]
Pan, Yingwei [2 ]
Li, Yehao [2 ]
Yao, Ting [2 ]
Hong, Richang [1 ]
Wang, Meng [1 ]
Mei, Tao [2 ]
Affiliations
[1] Hefei Univ Technol, Sch Comp & Informat, Hefei 230009, Peoples R China
[2] JD AI Res, CV Lab, Beijing 100105, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Semantics; Image recognition; Training; Visualization; Decoding; Task analysis; Dogs; Encoder-decoder networks; image captioning; self-supervised learning;
DOI
10.1109/TMM.2021.3060948
Chinese Library Classification
TP [Automation technology; computer technology];
Discipline code
0812;
Abstract
Image captioning has become a fast-developing research topic. Nevertheless, most existing works rely heavily on large amounts of image-sentence pairs, which hinders the practical application of captioning in the wild. In this paper, we present a novel Semantic-Constrained Self-learning (SCS) framework that explores an iterative self-learning strategy to learn an image captioner with only unpaired image and text data. Technically, SCS consists of two stages, i.e., pseudo pair generation and captioner re-training, which iteratively produce "pseudo" image-sentence pairs via a pre-trained captioner and re-train the captioner with the pseudo pairs, respectively. In particular, both stages are guided by the objects recognized in the image, which act as a semantic constraint that strengthens the semantic alignment between the input image and the output sentence. For pseudo pair generation, we leverage a semantic-constrained beam search that regularizes the decoding process by forcing the inclusion of recognized objects and the exclusion of irrelevant ones in the output sentence. For captioner re-training, a self-supervised triplet loss is utilized to preserve the relative semantic similarity ordering among generated sentences with regard to the input image triplets. Moreover, during self-critical training, an object inclusion reward encourages the inclusion of the predicted objects in the output sentence, and an adversarial reward pursues the generation of more realistic sentences. Experiments conducted on both dependent and independent unpaired data validate the superiority of SCS. Most remarkably, we obtain the best published CIDEr score to date, 74.7%, on the COCO Karpathy test split for unpaired image captioning.
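To make the self-supervised triplet loss mentioned in the abstract concrete, the sketch below shows a generic margin-based hinge over cosine similarities: a matched image-caption pair should score higher with the image than a mismatched one. The function name, the margin value, and the raw-vector representation are illustrative assumptions, not the paper's exact formulation, which operates on embeddings produced within the captioner.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def semantic_triplet_loss(img, pos_sent, neg_sent, margin=0.2):
    """Hinge loss: the positive (matched) sentence embedding must score
    at least `margin` higher against the image embedding than the
    negative (mismatched) sentence embedding, preserving the relative
    similarity ordering within the triplet."""
    return max(0.0, margin - cosine(img, pos_sent) + cosine(img, neg_sent))
```

In practice such a loss would be averaged over minibatches of (image, matched caption, mismatched caption) triplets; here plain lists stand in for learned embeddings.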
Pages: 904-916 (13 pages)