Image caption generation using a dual attention mechanism

Cited by: 7
Authors
Padate, Roshni [1 ]
Jain, Amit [1 ]
Kalla, Mukesh [1 ]
Sharma, Arvind [2 ]
Affiliations
[1] Sir Padampat Singhania Univ, Dept Comp Sci & Engn, Bhatewar, Rajasthan, India
[2] Sir Padampat Singhania Univ, Dept Math, Bhatewar, Rajasthan, India
Keywords
Image captioning; Inception V3; CNN; BI-LSTM; SI-EFO optimization; METEOR score; semantic attention
DOI
10.1016/j.engappai.2023.106112
CLC number
TP [Automation & Computer Technology]
Discipline code
0812
Abstract
Creating a sentence that accurately captures the main idea of an ambiguous image is a significant and demanding task. Conventional image captioning schemes are categorized into two classes: retrieval-oriented schemes and generation-oriented schemes. An image caption generation system should produce precise, fluent, natural, and informative sentences while accurately identifying the content of the image, such as the scene, objects, relationships, and object properties. However, accurately expressing the image's content when generating captions can be challenging because not all visual information can be used. In this article, a new image captioning model is introduced that comprises three main phases: (1) extraction of Inception V3 features, (2) dual (visual and textual) attention generation, and (3) image caption generation. A Convolutional Neural Network (CNN) generates visual attention from the extracted Inception V3 features. In parallel, the input texts for the associated images are analyzed and passed to an LSTM to create textual attention. A Bidirectional LSTM (BI-LSTM) then combines the textual and visual attention to generate image captions; in particular, the Self Improved Electric Fish Optimization (SI-EFO) algorithm is used to optimize the weights of the BI-LSTM. Finally, several measures confirm that the implemented system improves on existing methods: the adopted model is 35.21%, 33.76%, 39.52%, 29.69%, 30.12%, 21.49%, and 31.71% better than the GAN-RL, LSTM, GRU, EC + GOA, EC + CMBO, EC + DA, and EC + EFO models, respectively.
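For orientation, the sketch below (not taken from the paper) translates the abstract's three phases into a minimal PyTorch model: Inception V3 spatial features, a CNN-scored visual attention alongside an LSTM-scored textual attention, and a BI-LSTM decoder over the fused context. All class and layer names (DualAttentionCaptioner, vis_score, txt_score) are illustrative assumptions, and ordinary gradient training is assumed in place of the paper's SI-EFO weight optimization, whose update rules are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision.models import inception_v3, Inception_V3_Weights
from torchvision.models.feature_extraction import create_feature_extractor


class DualAttentionCaptioner(nn.Module):
    """Minimal sketch of the three-phase pipeline described in the abstract."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, feat_dim=2048):
        super().__init__()
        # Phase 1: frozen Inception V3 trunk; 'Mixed_7c' yields an
        # (N, 2048, 8, 8) spatial feature map for a 299x299 input.
        trunk = inception_v3(weights=Inception_V3_Weights.DEFAULT)
        self.features = create_feature_extractor(trunk, return_nodes={"Mixed_7c": "feat"})
        for p in self.features.parameters():
            p.requires_grad = False

        # Phase 2a: visual attention, a 1x1 CNN that scores each spatial
        # location and pools the feature map with those weights.
        self.vis_score = nn.Conv2d(feat_dim, 1, kernel_size=1)

        # Phase 2b: textual attention over LSTM-encoded caption tokens.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.text_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.txt_score = nn.Linear(hidden_dim, 1)

        # Phase 3: BI-LSTM decodes captions from the fused dual context.
        self.fuse = nn.Linear(feat_dim + hidden_dim, hidden_dim)
        self.bilstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, images, tokens):
        fmap = self.features(images)["feat"]                         # (N, 2048, 8, 8)
        a = torch.softmax(self.vis_score(fmap).flatten(2), dim=-1)   # (N, 1, 64)
        visual_ctx = (fmap.flatten(2) * a).sum(-1)                   # (N, 2048)

        enc, _ = self.text_lstm(self.embed(tokens))                  # (N, T, H)
        b = torch.softmax(self.txt_score(enc), dim=1)                # (N, T, 1)
        textual_ctx = (enc * b).sum(1)                               # (N, H)

        fused = torch.tanh(self.fuse(torch.cat([visual_ctx, textual_ctx], dim=-1)))
        seq = fused.unsqueeze(1).expand(-1, tokens.size(1), -1)      # one context per step
        dec, _ = self.bilstm(seq)
        return self.out(dec)                                         # (N, T, vocab)


# Hypothetical usage: batch of 2 images with 12-token caption inputs.
model = DualAttentionCaptioner(vocab_size=10000)
logits = model(torch.randn(2, 3, 299, 299), torch.randint(0, 10000, (2, 12)))
```

In the paper, the BI-LSTM weights would instead be searched by SI-EFO against a caption-quality fitness (e.g., METEOR); the sketch leaves them trainable by backpropagation for simplicity.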
Pages: 13