Human-like Controllable Image Captioning with Verb-specific Semantic Roles

Cited by: 42
Authors
Chen, Long [2,3]
Jiang, Zhihong [1]
Xiao, Jun [1]
Liu, Wei [4]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Tencent AI Lab, Bellevue, WA USA
[3] Columbia Univ, New York, NY 10027 USA
[4] Tencent Data Platform, New York, NY USA
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 | 2021
Funding
National Natural Science Foundation of China; Natural Science Foundation of Zhejiang Province
Keywords
DOI
10.1109/CVPR46437.2021.01657
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Controllable Image Captioning (CIC) - generating image descriptions following designated control signals - has received unprecedented attention over the last few years. To emulate the human ability in controlling caption generation, current CIC studies focus exclusively on control signals concerning objective properties, such as contents of interest or descriptive patterns. However, we argue that almost all existing objective control signals have overlooked two indispensable characteristics of an ideal control signal: 1) Event-compatible: all visual contents referred to in a single sentence should be compatible with the described activity. 2) Sample-suitable: the control signals should be suitable for a specific image sample. To this end, we propose a new control signal for CIC: Verb-specific Semantic Roles (VSR). VSR consists of a verb and some semantic roles, which represents a targeted activity and the roles of entities involved in this activity. Given a designated VSR, we first train a grounded semantic role labeling (GSRL) model to identify and ground all entities for each role. Then, we propose a semantic structure planner (SSP) to learn human-like descriptive semantic structures. Lastly, we use a role-shift captioning model to generate the captions. Extensive experiments and ablations demonstrate that our framework can achieve better controllability than several strong baselines on two challenging CIC benchmarks. Besides, we can generate multi-level diverse captions easily.
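As the abstract describes, a VSR control signal pairs a verb with the semantic roles of entities taking part in the activity, and the framework processes it in three stages (GSRL grounding, SSP planning, role-shift captioning). The sketch below only illustrates how such a signal might be represented and fed through that pipeline order; the class, role labels, and function are hypothetical, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class VSR:
    """Verb-specific Semantic Role control signal: a verb plus the
    semantic roles of the entities involved in that activity."""
    verb: str
    roles: list  # illustrative PropBank-style labels, e.g. ARG0 (agent)

def describe(vsr: VSR) -> str:
    # Hypothetical pipeline order taken from the abstract:
    #  1) a GSRL model grounds an entity in the image for each role,
    #  2) the SSP orders roles into a human-like semantic structure,
    #  3) a role-shift captioner verbalizes the resulting plan.
    return f"caption controlled by verb '{vsr.verb}' with roles {vsr.roles}"

# Illustrative signal for an image like "a man rides a horse on the beach"
vsr = VSR(verb="ride", roles=["ARG0", "ARG1", "ARGM-LOC"])
print(describe(vsr))
```

Because the verb constrains which roles are admissible, such a signal is event-compatible by construction, and choosing roles that the GSRL model can actually ground in the given image is what makes it sample-suitable.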
Pages: 16841-16851
Page count: 11