Hypernymization of named entity-rich captions for grounding-based multi-modal pretraining

Cited by: 0
Authors
Nebbia, Giacomo [1 ]
Kovashka, Adriana [1 ]
Affiliations
[1] Univ Pittsburgh, Pittsburgh, PA USA
Source
PROCEEDINGS OF THE 2023 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL, ICMR 2023 | 2023
Funding
US National Science Foundation (NSF);
Keywords
grounding; hypernymization; named entities; open-vocabulary detection;
DOI
10.1145/3591106.3592223
CLC number
TP18 [Artificial intelligence theory];
Subject classification codes
081104; 0812; 0835; 1405;
Abstract
Named entities are ubiquitous in the text that naturally accompanies images, especially in domains such as news or Wikipedia articles. Previous work has identified named entities as a likely reason for the low performance of image-text retrieval models pretrained on Wikipedia and evaluated on named-entity-free benchmark datasets. Because each named entity is rarely mentioned, named entities can be challenging to model. They also represent missed learning opportunities for self-supervised models: the link between a named entity and an object in the image may be missed by the model, whereas it would not be if the object were mentioned using a more common term. In this work, we investigate hypernymization, i.e., replacing named entities with more general terms, as a way to handle named entities when pretraining grounding-based multi-modal models and when fine-tuning for open-vocabulary detection. We propose two ways to perform hypernymization: (1) a "manual" pipeline relying on a comprehensive ontology of concepts, and (2) a "learned" approach in which we train a language model to perform hypernymization. We run experiments on data from Wikipedia and from The New York Times. We report improved pretraining performance on objects of interest following hypernymization, and we show the promise of hypernymization for open-vocabulary detection, specifically on classes not seen during training.
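The caption rewriting the abstract describes can be illustrated with a minimal sketch. The tiny entity-to-hypernym mapping below is a hypothetical stand-in for the comprehensive ontology the paper's "manual" pipeline relies on; the paper's actual pipeline and ontology are not reproduced here.

```python
import re

# Hypothetical mini-ontology mapping named entities to hypernyms.
# The paper's "manual" pipeline uses a comprehensive concept ontology instead.
ONTOLOGY = {
    "Barack Obama": "person",
    "Boeing 747": "airplane",
    "Golden Gate Bridge": "bridge",
}

def hypernymize(caption: str, ontology: dict) -> str:
    """Replace each named entity found in the caption with its hypernym."""
    for entity, hypernym in ontology.items():
        caption = re.sub(re.escape(entity), hypernym, caption)
    return caption

print(hypernymize("Barack Obama boards the Boeing 747.", ONTOLOGY))
# -> person boards the airplane.
```

After rewriting, a self-supervised model pretrained on such captions can ground the common term ("airplane") to the pictured object, a link it might miss when the caption uses only the rare named entity.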
Pages: 67-75
Page count: 9