A Deep Local and Global Scene-Graph Matching for Image-Text Retrieval

Cited by: 5
Authors
Manh-Duy Nguyen [1 ]
Binh T Nguyen [2 ,3 ,4 ]
Cathal Gurrin [1 ]
Affiliations
[1] School of Computing, Dublin, Ireland
[2] AISIA Research Lab, Ho Chi Minh City, Vietnam
[3] University of Science, Ho Chi Minh City, Vietnam
[4] Vietnam National University Ho Chi Minh City, Ho Chi Minh City, Vietnam
Source
NEW TRENDS IN INTELLIGENT SOFTWARE METHODOLOGIES, TOOLS AND TECHNIQUES | 2021 / Vol. 337
Funding
Science Foundation Ireland;
Keywords
scene graphs; graph embedding; image retrieval;
DOI
10.3233/FAIA210049
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Conventional approaches to image-text retrieval mainly focus on indexing the visual objects appearing in pictures but ignore the interactions between these objects. Such object occurrences and interactions are equally useful and important in this field, as they are usually mentioned in the text. Scene-graph representation is a suitable method for the image-text matching challenge and has obtained good results due to its ability to capture this inter-relationship information. Both images and text are represented as scene graphs, and the retrieval task is formulated as a scene-graph matching problem. In this paper, we introduce the Local and Global Scene Graph Matching (LGSGM) model, which enhances the state-of-the-art method by integrating an extra graph convolution network to capture the general information of a graph. Specifically, for a pair of scene graphs of an image and its caption, two separate models are used to learn the features of each graph's nodes and edges. A Siamese-structure graph convolution model is then employed to embed each graph into a vector. We finally combine the graph-level and vector-level information to calculate the similarity of the image-text pair. Empirical experiments show that this combination of levels improves the performance of the baseline method, increasing recall by more than 10% on the Flickr30k dataset. Our implementation code can be found at https://github.com/m2man/LGSGM.
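To make the pipeline described in the abstract concrete, the following is a minimal sketch (in PyTorch) of the general idea: a shared ("Siamese") graph convolution encoder produces node-level (local) embeddings and a pooled graph-level (global) vector for both the image scene graph and the caption scene graph, and the two similarity levels are combined into a single matching score. All class and function names, layer sizes, the mean-aggregation convolution, and the simple pooling and matching choices below are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

# A minimal sketch (not the authors' code) of the high-level LGSGM idea:
# embed both scene graphs with a shared graph convolution encoder, then
# combine a local (node-level) similarity with a global (graph-vector)
# similarity into one image-text matching score.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCNLayer(nn.Module):
    """One mean-aggregation graph convolution layer (illustrative)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj):
        # adj: (N, N) adjacency with self-loops; average over neighbours.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        agg = adj @ node_feats / deg
        return F.relu(self.linear(agg))


class SiameseGraphEncoder(nn.Module):
    """Shared encoder applied to both the image and the caption scene graph."""
    def __init__(self, node_dim, hidden_dim, out_dim):
        super().__init__()
        self.gcn1 = SimpleGCNLayer(node_dim, hidden_dim)
        self.gcn2 = SimpleGCNLayer(hidden_dim, out_dim)

    def forward(self, node_feats, adj):
        h = self.gcn1(node_feats, adj)
        h = self.gcn2(h, adj)
        graph_vec = h.mean(dim=0)   # global (graph-level) embedding
        return h, graph_vec         # local node embeddings + global vector


def local_similarity(img_nodes, txt_nodes):
    # For each caption node, take its best-matching image node, then average:
    # a simple stand-in for the local (node-level) matching score.
    sim = F.normalize(txt_nodes, dim=1) @ F.normalize(img_nodes, dim=1).T
    return sim.max(dim=1).values.mean()


def global_similarity(img_vec, txt_vec):
    return F.cosine_similarity(img_vec, txt_vec, dim=0)


if __name__ == "__main__":
    encoder = SiameseGraphEncoder(node_dim=300, hidden_dim=256, out_dim=128)

    # Toy scene graphs: 5 image nodes and 4 caption nodes with random features.
    img_feats, txt_feats = torch.randn(5, 300), torch.randn(4, 300)
    img_adj = torch.eye(5)
    img_adj[0, 1] = 1.0
    img_adj[1, 0] = 1.0
    txt_adj = torch.eye(4)
    txt_adj[2, 3] = 1.0
    txt_adj[3, 2] = 1.0

    img_nodes, img_vec = encoder(img_feats, img_adj)
    txt_nodes, txt_vec = encoder(txt_feats, txt_adj)

    score = local_similarity(img_nodes, txt_nodes) + global_similarity(img_vec, txt_vec)
    print("combined image-text matching score:", score.item())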
Pages: 510-523
Page count: 14