Cross-modality Multiple Relations Learning for Knowledge-based Visual Question Answering

Citations: 1
Authors
Wang, Yan [1 ,2 ]
Li, Peize [3 ]
Si, Qingyi [4 ,5 ]
Zhang, Hanwen [4 ,5 ]
Zang, Wenyu [6 ]
Lin, Zheng [4 ,5 ]
Fu, Peng [4 ,5 ]
Affiliations
[1] Jilin Univ, Coll Comp Sci & Technol, Sch Artificial Intelligence, Changchun 130012, Peoples R China
[2] Jilin Univ, Coll Comp Sci & Technol, Minist Educ, Key Lab Symbol Comp & Knowledge Engn, Changchun 130012, Peoples R China
[3] Jilin Univ, Sch Artificial Intelligence, Changchun 130012, Peoples R China
[4] Chinese Acad Sci, Inst Informat Engn, Beijing 100049, Peoples R China
[5] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing 100049, Peoples R China
[6] China Elect Corp, Beijing 100846, Peoples R China
Funding
National Natural Science Foundation of China;
关键词
Cross-modality relation; external knowledge; visual question answering;
DOI
10.1145/3618301
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Knowledge-based visual question answering requires not only answering questions about images but also incorporating external knowledge to support reasoning in the joint vision-language space. To bridge the gap between visual content and semantic cues, it is important to capture question-related, semantics-rich vision-language connections. Most existing solutions model only simple intra-modality relations or represent a cross-modality relation with a single vector, which makes it difficult to model the complex connections between visual features and question features effectively. We therefore propose a cross-modality multiple relations learning model that enriches cross-modality representations and constructs advanced multi-modality knowledge triplets. First, we design a simple yet effective method to generate multiple relations that capture the rich cross-modality connections; these relations link the textual question to the related visual objects, and the resulting multi-modality triplets efficiently align visual objects with the corresponding textual answers. Second, to encourage the multiple relations to align with different semantic relations, we further formulate a novel global-local loss: the global loss draws visual objects and their corresponding textual answers close to each other through cross-modality relations in the vision-language space, while the local loss preserves semantic diversity among the multiple relations. Experimental results on the Outside Knowledge VQA and Knowledge-Routed Visual Question Reasoning datasets demonstrate that our model outperforms state-of-the-art methods.
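The global-local loss described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy version, not the paper's exact formulation: the TransE-style distance for the global term, the helper name `global_local_loss`, the weight `lam`, and the cosine-similarity diversity penalty for the local term are all illustrative choices.

```python
import numpy as np

def global_local_loss(obj, rels, ans, lam=0.1):
    """Hypothetical sketch of a global-local loss (not the paper's exact form).

    obj:  (d,) visual-object embedding
    rels: (k, d) k candidate cross-modality relation embeddings
    ans:  (d,) textual-answer embedding
    """
    # Global term (illustrative): TransE-style alignment, where the object
    # embedding translated by the best-matching relation should land near
    # the answer embedding.
    dists = np.linalg.norm(obj + rels - ans, axis=1)  # (k,) distances
    global_loss = dists.min()

    # Local term (illustrative): keep the k relations semantically diverse
    # by penalizing positive pairwise cosine similarity between them.
    unit = rels / np.linalg.norm(rels, axis=1, keepdims=True)
    sim = unit @ unit.T                               # (k, k) cosine matrix
    k = rels.shape[0]
    off_diag = sim[~np.eye(k, dtype=bool)]            # exclude self-similarity
    local_loss = np.clip(off_diag, 0.0, None).mean()

    return global_loss + lam * local_loss
```

With three identical relations that translate the object exactly onto the answer, the global term vanishes while the local term is maximal; with mutually orthogonal relations, the local term vanishes instead, showing how the two terms trade off alignment against diversity.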
Pages: 22