Comprehending and Ordering Semantics for Image Captioning

Cited by: 66
Authors
Li, Yehao [1 ]
Pan, Yingwei [1 ]
Yao, Ting [1 ]
Mei, Tao [1 ]
Affiliations
[1] JD Explore Acad, Beijing, Peoples R China
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022) | 2022
Funding
National Key R&D Program of China;
Keywords
DOI
10.1109/CVPR52688.2022.01746
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Comprehending the rich semantics in an image and arranging them in linguistic order are essential to composing a visually grounded and linguistically coherent description for image captioning. Modern techniques commonly capitalize on a pre-trained object detector/classifier to mine the semantics in an image, while leaving the inherent linguistic ordering of semantics under-exploited. In this paper, we propose a new Transformer-style structure, namely Comprehending and Ordering Semantics Networks (COS-Net), which unifies an enriched semantic comprehending process and a learnable semantic ordering process in a single architecture. Technically, we first utilize a cross-modal retrieval model to retrieve sentences relevant to each image, and all words in the retrieved sentences are taken as primary semantic cues. Next, a novel semantic comprehender is devised to filter out the irrelevant semantic words in the primary semantic cues and meanwhile infer the missing relevant semantic words visually grounded in the image. After that, we feed all the screened and enriched semantic words into a semantic ranker, which learns to allocate the semantic words in linguistic order, as humans do. This sequence of ordered semantic words is further integrated with the visual tokens of the image to trigger sentence generation. Empirical evidence shows that COS-Net clearly surpasses state-of-the-art approaches on COCO and achieves the best-to-date CIDEr score of 141.1% on the Karpathy test split. Source code is available at https://github.com/YehLi/xmodaler/tree/master/configs/image_caption/cosnet.
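The abstract outlines a three-stage pipeline: retrieve sentences for an image and collect their words as primary semantic cues, screen and enrich those cues with a semantic comprehender, order the retained words with a semantic ranker, and fuse the ordered words with visual tokens to drive caption generation. The PyTorch sketch below only illustrates that data flow; the module names (SemanticComprehender, SemanticRanker), layer choices, and tensor shapes are assumptions made for illustration and do not reproduce the official COS-Net implementation in the xmodaler repository linked above.

import torch
import torch.nn as nn


class SemanticComprehender(nn.Module):
    """Grounds primary semantic cues in the visual tokens, gates out irrelevant
    cue words, and predicts missing words over a fixed vocabulary (assumed design)."""

    def __init__(self, dim, vocab_size):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.keep_gate = nn.Linear(dim, 1)               # score each cue word for relevance
        self.missing_head = nn.Linear(dim, vocab_size)   # infer missing semantic words

    def forward(self, cue_embs, visual_tokens):
        grounded, _ = self.cross_attn(cue_embs, visual_tokens, visual_tokens)
        keep_prob = torch.sigmoid(self.keep_gate(grounded))        # (B, N, 1)
        missing_logits = self.missing_head(visual_tokens.mean(1))  # (B, V)
        return grounded * keep_prob, missing_logits


class SemanticRanker(nn.Module):
    """Assigns each retained semantic word a linguistic-order slot and reorders
    the words accordingly (a simplified stand-in for the learnable ordering step)."""

    def __init__(self, dim, max_pos=20):
        super().__init__()
        self.pos_head = nn.Linear(dim, max_pos)

    def forward(self, semantic_embs):
        pos_logits = self.pos_head(semantic_embs)          # (B, N, max_pos)
        order = pos_logits.argmax(-1).argsort(dim=-1)      # sort words by predicted slot
        idx = order.unsqueeze(-1).expand_as(semantic_embs)
        return torch.gather(semantic_embs, 1, idx)         # reordered semantic words


# Toy forward pass: 2 images, 36 visual tokens, 12 retrieved cue words, dim 512.
if __name__ == "__main__":
    B, T, N, D, V = 2, 36, 12, 512, 1000
    visual_tokens = torch.randn(B, T, D)   # e.g., grid/region features of the image
    cue_embs = torch.randn(B, N, D)        # embeddings of retrieved cue words
    comprehender = SemanticComprehender(D, V)
    ranker = SemanticRanker(D)
    screened, missing_logits = comprehender(cue_embs, visual_tokens)
    ordered = ranker(screened)
    # `ordered` would then be concatenated with the visual tokens to condition the decoder.
    print(ordered.shape, missing_logits.shape)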
Pages: 17969-17978
Number of pages: 10