NeuSyRE: Neuro-symbolic visual understanding and reasoning framework based on scene graph enrichment

Cited by: 2
Authors
Khan, M. Jaleed [1 ]
Breslin, John G. [1 ,2 ]
Curry, Edward [1 ,2 ]
Affiliations
[1] Univ Galway, Data Sci Inst, SFI Ctr Res Training Artificial Intelligence, Galway, Ireland
[2] Univ Galway, Data Sci Inst, Insight SFI Res Ctr Data Analyt, Galway, Ireland
Funding
Science Foundation Ireland
Keywords
Scene graph; image representation; common sense knowledge; knowledge enrichment; visual reasoning; image captioning
DOI
10.3233/SW-233510
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Exploring the potential of neuro-symbolic hybrid approaches offers promising avenues for seamless high-level understanding and reasoning about visual scenes. Scene Graph Generation (SGG) is a symbolic image representation approach based on deep neural networks (DNN) that involves predicting objects, their attributes, and pairwise visual relationships in images to create scene graphs, which are utilized in downstream visual reasoning. The crowdsourced training datasets used in SGG are highly imbalanced, which results in biased SGG results. The vast number of possible triplets makes it challenging to collect sufficient training samples for every visual concept or relationship. To address these challenges, we propose augmenting the typical data-driven SGG approach with common sense knowledge to enhance the expressiveness and autonomy of visual understanding and reasoning. We present a loosely-coupled neuro-symbolic visual understanding and reasoning framework that employs a DNN-based pipeline for object detection and multi-modal pairwise relationship prediction for scene graph generation, and leverages common sense knowledge in heterogeneous knowledge graphs to enrich scene graphs for improved downstream reasoning. A comprehensive evaluation is performed on multiple standard datasets, including Visual Genome and Microsoft COCO, in which the proposed approach outperformed the state-of-the-art SGG methods in terms of relationship recall scores, i.e., Recall@K and mean Recall@K, as well as the state-of-the-art scene graph-based image captioning methods in terms of SPICE and CIDEr scores, with comparable BLEU, ROUGE and METEOR scores. Qualitative results showed that enrichment improved the expressiveness of scene graphs, leading to more intuitive and meaningful caption generation from scene graphs. Our results validate the effectiveness of enriching scene graphs with common sense knowledge using heterogeneous knowledge graphs.
This work provides a baseline for future research in knowledge-enhanced visual understanding and reasoning. The source code is available at https://github.com/jaleedkhan/neusire.
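The abstract reports relationship recall scores, Recall@K and mean Recall@K, as the primary SGG evaluation metric. As a minimal sketch (not the authors' evaluation code), Recall@K for a single image is typically the fraction of ground-truth (subject, predicate, object) triplets that appear among the model's K highest-scored predicted triplets; the triplets and scores below are illustrative, not from the paper:

```python
# Sketch of per-image relationship Recall@K as commonly defined in SGG
# evaluation: fraction of ground-truth triplets recovered in the top-K
# predictions ranked by confidence score.

def recall_at_k(predicted, ground_truth, k):
    """predicted: list of ((subj, pred, obj), score); ground_truth: set of triplets."""
    if not ground_truth:
        return 0.0
    # Keep the K triplets with the highest scores.
    top_k = {t for t, _ in sorted(predicted, key=lambda x: -x[1])[:k]}
    return len(top_k & ground_truth) / len(ground_truth)

# Hypothetical example: two ground-truth relationships, three predictions.
preds = [(("man", "riding", "horse"), 0.9),
         (("man", "wearing", "hat"), 0.7),
         (("horse", "on", "grass"), 0.4)]
gt = {("man", "riding", "horse"), ("horse", "on", "grass")}
print(recall_at_k(preds, gt, 2))  # 0.5 — one of two GT triplets is in the top 2
```

Mean Recall@K averages this quantity per predicate class before averaging, which counteracts the dataset imbalance the abstract describes by giving rare relationships equal weight.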
Pages: 1389–1413
Page count: 25