Human-like Controllable Image Captioning with Verb-specific Semantic Roles

Cited: 42
Authors
Chen, Long [2 ,3 ]
Jiang, Zhihong [1 ]
Xiao, Jun [1 ]
Liu, Wei [4 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Peoples R China
[2] Tencent AI Lab, Bellevue, WA USA
[3] Columbia Univ, New York, NY 10027 USA
[4] Tencent Data Platform, New York, NY USA
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 | 2021
Funding
National Natural Science Foundation of China; Natural Science Foundation of Zhejiang Province;
DOI
10.1109/CVPR46437.2021.01657
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Controllable Image Captioning (CIC) - generating image descriptions following designated control signals - has received unprecedented attention over the last few years. To emulate the human ability to control caption generation, current CIC studies focus exclusively on control signals concerning objective properties, such as contents of interest or descriptive patterns. However, we argue that almost all existing objective control signals overlook two indispensable characteristics of an ideal control signal: 1) Event-compatible: all visual contents referred to in a single sentence should be compatible with the described activity. 2) Sample-suitable: the control signals should be suitable for a specific image sample. To this end, we propose a new control signal for CIC: Verb-specific Semantic Roles (VSR). A VSR consists of a verb and a set of semantic roles, which together represent a targeted activity and the roles of the entities involved in it. Given a designated VSR, we first train a grounded semantic role labeling (GSRL) model to identify and ground all entities for each role. Then, we propose a semantic structure planner (SSP) to learn human-like descriptive semantic structures. Lastly, we use a role-shift captioning model to generate the captions. Extensive experiments and ablations demonstrate that our framework achieves better controllability than several strong baselines on two challenging CIC benchmarks. Moreover, our framework can easily generate multi-level diverse captions.
Pages: 16841 - 16851
Page count: 11
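To make the abstract's notion of a VSR control signal concrete, the following is a minimal, hypothetical sketch of what a verb-plus-semantic-roles signal might look like as a data structure. The class name, the PropBank-style role labels (ARG0, ARG1, ARGM-LOC), and the example entities are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class VerbSpecificSemanticRoles:
    """Hypothetical sketch of a VSR control signal: a targeted verb
    plus the semantic roles of entities involved in that activity."""
    verb: str                                        # targeted activity, e.g. "ride"
    roles: List[str] = field(default_factory=list)   # PropBank-style role labels (assumed)

    def describe(self) -> str:
        # Render the signal as verb(role1, role2, ...)
        return f"{self.verb}({', '.join(self.roles)})"


# Example: control a caption about a riding event with an agent,
# a vehicle, and a location (all labels are illustrative).
vsr = VerbSpecificSemanticRoles(
    verb="ride",
    roles=["ARG0:rider", "ARG1:vehicle", "ARGM-LOC:place"],
)
print(vsr.describe())  # → ride(ARG0:rider, ARG1:vehicle, ARGM-LOC:place)
```

In the paper's pipeline, such a signal would be the input that the GSRL model grounds to image regions before the planner and captioner produce the sentence.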