Unpaired Image Captioning With Semantic-Constrained Self-Learning

Cited by: 31
Authors
Ben, Huixia [1 ]
Pan, Yingwei [2 ]
Li, Yehao [2 ]
Yao, Ting [2 ]
Hong, Richang [1 ]
Wang, Meng [1 ]
Mei, Tao [2 ]
Affiliations
[1] Hefei Univ Technol, Sch Comp & Informat, Hefei 230009, Peoples R China
[2] JD AI Res, CV Lab, Beijing 100105, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Semantics; Image recognition; Training; Visualization; Decoding; Task analysis; Dogs; Encoder-decoder networks; image captioning; self-supervised learning;
DOI
10.1109/TMM.2021.3060948
Chinese Library Classification
TP [Automation and computer technology];
Discipline Code
0812;
Abstract
Image captioning has been an emerging and fast-developing research topic. Nevertheless, most existing works rely heavily on large amounts of image-sentence pairs, which hinders the practical application of captioning in the wild. In this paper, we present a novel Semantic-Constrained Self-learning (SCS) framework that explores an iterative self-learning strategy to learn an image captioner with only unpaired image and text data. Technically, SCS consists of two stages, i.e., pseudo pair generation and captioner re-training, which iteratively produce "pseudo" image-sentence pairs via a pre-trained captioner and re-train the captioner with the pseudo pairs, respectively. In particular, both stages are guided by the objects recognized in the image, which act as a semantic constraint to strengthen the semantic alignment between the input image and the output sentence. We leverage a semantic-constrained beam search for pseudo pair generation to regularize the decoding process with the recognized objects by forcing the inclusion/exclusion of the recognized/irrelevant objects in the output sentence. For captioner re-training, a self-supervised triplet loss is utilized to preserve the relative semantic similarity ordering among generated sentences with regard to the input image triplets. Moreover, an object inclusion reward and an adversarial reward are adopted during self-critical training to encourage the inclusion of the predicted objects in the output sentence and to pursue the generation of more realistic sentences, respectively. Experiments conducted on both dependent and independent unpaired data validate the superiority of SCS. More remarkably, we obtain the best published CIDEr score to date of 74.7% on the COCO Karpathy test split for unpaired image captioning.
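The self-supervised triplet loss mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the anchor image and the sentences generated for a matched (positive) and a mismatched (negative) image are already embedded as fixed-length vectors, and applies a standard margin-based hinge on cosine similarity so that the anchor stays closer to the positive than to the negative.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors (plain lists of floats)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-based triplet loss: penalize the case where the mismatched
    (negative) embedding is more similar to the anchor than the matched
    (positive) one, up to a margin."""
    return max(0.0, margin + cosine(anchor, negative) - cosine(anchor, positive))

# A well-ordered triplet incurs zero loss; a reversed one is penalized.
print(triplet_loss([1.0, 0.0], [1.0, 0.0], [0.0, 1.0]))  # 0.0
print(triplet_loss([1.0, 0.0], [0.0, 1.0], [1.0, 0.0]))  # 1.2
```

The margin value of 0.2 is an illustrative choice, not one taken from the paper.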
Pages: 904-916
Page count: 13