Control With Style: Style Embedding-Based Variational Autoencoder for Controlled Stylized Caption Generation Framework

Cited by: 0
Authors
Sharma, Dhruv [1 ]
Dhiman, Chhavi [1 ]
Kumar, Dinesh [1 ]
Affiliations
[1] Delhi Technol Univ, Dept Elect & Commun Engn, Delhi 110042, India
Keywords
Visualization; Task analysis; Long short-term memory; Decoding; Adaptation models; Transformers; Generators; Bag of captions (BoCs); computer vision; controlled text generation; image captioning; natural language processing; smooth maximum unit (SMU); stylized image captioning; variational autoencoder (VAE)
DOI
10.1109/TCDS.2024.3405573
CLC number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Automatic image captioning is a computationally intensive and structurally complex task that describes the contents of an image as a natural-language sentence. Methods developed in the recent past focused mainly on describing the factual content of images, ignoring the different emotions and styles (romantic, humorous, angry, etc.) associated with them. To overcome this, a few works incorporated style-based caption generation to capture the variability in the generated descriptions. This article presents a style embedding-based variational autoencoder for controlled stylized caption generation framework (RFCG+SE-VAE-CSCG), which generates controlled, style-aware textual descriptions of images. It works in two phases: 1) refined factual caption generation (RFCG); and 2) SE-VAE-CSCG. The former defines an encoder-decoder model for generating refined factual captions, whereas the latter presents a style embedding-based VAE for controlled stylized caption generation. The overall framework generates style-based descriptions of images by leveraging bags of captions (BoCs). Moreover, with a controlled text generation model, the proposed work efficiently learns disentangled representations and generates realistic stylized descriptions of images. Experiments on MSCOCO, Flickr30K, and FlickrStyle10K yield state-of-the-art results for both refined and style-based caption generation, supported by an ablation study.
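The abstract's core mechanism — a VAE whose decoder is conditioned on a learned style embedding — can be sketched minimally as follows. This is an illustrative sketch only: the function names, the style table, and all dimensions are assumptions, not the paper's actual architecture, and the full model would wrap this inside an encoder-decoder captioner.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, logvar, rng):
    """Standard VAE reparameterization trick: z = mu + sigma * eps."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Hypothetical style table: one learned vector per style
# (e.g. index 0 = factual, 1 = romantic, 2 = humorous).
style_table = rng.standard_normal((3, 16))

def condition_on_style(z, style_id):
    """Concatenate the latent code with a style embedding to form the
    decoder's conditioning input (illustrative of style-conditioned VAEs
    in general, not this paper's exact formulation)."""
    return np.concatenate([z, style_table[style_id]], axis=-1)

# Toy posterior parameters for one caption's latent code.
mu = np.zeros(32)
logvar = np.zeros(32)
z = reparameterize(mu, logvar, rng)
cond = condition_on_style(z, style_id=1)   # "romantic" in this toy table
print(cond.shape)
```

Swapping `style_id` while holding `z` fixed is what makes the generation *controlled*: the content-bearing latent stays the same while the style embedding steers the decoder, which is only possible if the two representations are disentangled, as the abstract claims.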
Pages: 2032-2042
Page count: 11