Cross-modality Multiple Relations Learning for Knowledge-based Visual Question Answering

Cited by: 7
Authors
Wang, Yan [1 ,2 ]
Li, Peize [3 ]
Si, Qingyi [4 ,5 ]
Zhang, Hanwen [4 ,5 ]
Zang, Wenyu [6 ]
Lin, Zheng [4 ,5 ]
Fu, Peng [4 ,5 ]
Affiliations
[1] Jilin Univ, Coll Comp Sci & Technol, Sch Artificial Intelligence, Changchun 130012, Peoples R China
[2] Jilin Univ, Coll Comp Sci & Technol, Minist Educ, Key Lab Symbol Comp & Knowledge Engn, Changchun 130012, Peoples R China
[3] Jilin Univ, Sch Artificial Intelligence, Changchun 130012, Peoples R China
[4] Chinese Acad Sci, Inst Informat Engn, Beijing 100049, Peoples R China
[5] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing 100049, Peoples R China
[6] China Elect Corp, Beijing 100846, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Cross-modality relation; external knowledge; visual question answering
DOI
10.1145/3618301
CLC Number
TP [Automation technology, computer technology]
Discipline Classification Code
0812
Abstract
Knowledge-based visual question answering not only answers questions about images but also incorporates external knowledge to support reasoning in the joint space of vision and language. To bridge the gap between visual content and semantic cues, it is important to capture question-related and semantics-rich vision-language connections. Most existing solutions model simple intra-modality relations or represent cross-modality relations with a single vector, which makes it difficult to effectively model the complex connections between visual features and question features. Thus, we propose a cross-modality multiple relations learning model that enriches cross-modality representations and constructs advanced multi-modality knowledge triplets. First, we design a simple yet effective method to generate multiple relations that capture the rich cross-modality relations. These various cross-modality relations link the textual question to the related visual objects, and the resulting multi-modality triplets efficiently align the visual objects with their corresponding textual answers. Second, to encourage the multiple relations to better align with different semantic relations, we further formulate a novel global-local loss. The global loss draws visual objects and their corresponding textual answers close to each other through cross-modality relations in the vision-language space, while the local loss preserves semantic diversity among the multiple relations. Experimental results on the Outside Knowledge VQA and Knowledge-Routed Visual Question Reasoning datasets demonstrate that our model outperforms state-of-the-art methods.
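The global-local loss described above can be sketched minimally as follows. This is an illustrative reconstruction, not the paper's actual formulation: it assumes a TransE-style global term (object embedding plus relation embedding should land near the answer embedding) and a pairwise cosine-similarity penalty as the local diversity term; the function name `global_local_loss` and all tensor shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def global_local_loss(obj_emb, rel_embs, ans_emb):
    """Illustrative global-local loss sketch (assumed formulation).

    obj_emb:  (d,)   embedding of one visual object
    rel_embs: (k, d) k cross-modality relation embeddings
    ans_emb:  (d,)   embedding of the textual answer
    """
    # Global term: each relation should translate the object embedding
    # toward the answer embedding (TransE-style distance).
    translated = obj_emb.unsqueeze(0) + rel_embs               # (k, d)
    global_loss = (translated - ans_emb.unsqueeze(0)).norm(dim=1).mean()

    # Local term: keep the k relations semantically diverse by
    # penalizing positive pairwise cosine similarity between them.
    normed = F.normalize(rel_embs, dim=1)                      # (k, d)
    sim = normed @ normed.t()                                  # (k, k)
    k = rel_embs.size(0)
    off_diag = sim - torch.eye(k)                              # zero the diagonal
    local_loss = off_diag.clamp(min=0).sum() / (k * (k - 1))

    return global_loss + local_loss
```

Under this reading, the global term anchors every relation to the same vision-to-answer mapping, while the local term prevents the relations from collapsing into one vector.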
Pages: 22