Cross-Domain Image Captioning with Discriminative Finetuning

Cited by: 8
Authors
Dessi, Roberto [1 ]
Bevilacqua, Michele [2 ]
Gualdoni, Eleonora [3 ]
Carraz Rakotonirina, Nathanael [3 ]
Franzon, Francesca [3 ]
Baroni, Marco [4 ]
Affiliations
[1] UPF, Meta AI, Barcelona, Spain
[2] Samaya AI, Mountain View, CA USA
[3] UPF, Barcelona, Spain
[4] UPF, ICREA, Barcelona, Spain
Source
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Funding
European Research Council
DOI
10.1109/CVPR52729.2023.00670
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Neural captioners are typically trained to mimic human-generated references without optimizing for any specific communication goal, leading to problems such as the generation of vague captions. In this paper, we show that fine-tuning an out-of-the-box neural captioner with a self-supervised discriminative communication objective helps to recover a plain, visually descriptive language that is more informative about image contents. Given a target image, the system must learn to produce a description that enables an out-of-the-box text-conditioned image retriever to identify that image among a set of candidates. We experiment with the popular ClipCap captioner, also replicating the main results with BLIP. In terms of similarity to ground-truth human descriptions, the captions emerging from discriminative finetuning lag slightly behind those generated by the non-finetuned model, when the latter is trained and tested on the same caption dataset. However, when the model is used without further tuning to generate captions for out-of-domain datasets, our discriminatively finetuned captioner generates descriptions that resemble human references more closely than those produced by the same captioner without finetuning. We further show that, on the Conceptual Captions dataset, discriminatively finetuned captions are more helpful than either vanilla ClipCap captions or ground-truth captions for human annotators tasked with an image discrimination task.
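
The training signal described in the abstract can be read as a policy-gradient loop around a frozen retriever: the captioner is rewarded when the retriever can pick the target image out of a batch from the generated caption alone. Below is a minimal sketch of such an objective, assuming a CLIP-style retriever; the interfaces `captioner.sample`, `clip.encode_image`, and `clip.encode_text` are hypothetical placeholders for illustration, not the authors' released implementation.

    # Sketch of a discriminative finetuning objective: reward the captioner
    # when a frozen text-conditioned retriever identifies the target image
    # among the other images in the batch, given only the sampled caption.
    import torch
    import torch.nn.functional as F

    def discriminative_loss(captioner, clip, images):
        """REINFORCE-style loss; `captioner` is trainable, `clip` is frozen."""
        # Hypothetical API: sample captions and keep their sequence
        # log-probabilities so gradients can flow into the captioner.
        captions, log_probs = captioner.sample(images)   # log_probs: (B,)

        with torch.no_grad():  # the retriever provides reward only
            # Tokenization of `captions` is omitted for brevity.
            img_emb = F.normalize(clip.encode_image(images), dim=-1)   # (B, D)
            txt_emb = F.normalize(clip.encode_text(captions), dim=-1)  # (B, D)
            sims = txt_emb @ img_emb.t()                 # (B, B) caption-image similarity
            # Reward: retriever's log-probability of the correct image,
            # with CLIP-style temperature scaling (1/0.07).
            reward = F.log_softmax(sims / 0.07, dim=-1).diagonal()     # (B,)
            baseline = reward.mean()                     # variance-reduction baseline

        # Push up the log-probability of captions with above-average reward.
        return -((reward - baseline) * log_probs).mean()

Subtracting the batch-mean reward is a standard variance-reduction baseline for REINFORCE; the paper's actual estimator and reward shaping may differ.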
Pages: 6935-6944 (10 pages)