Vision-Aware Language Reasoning for Referring Image Segmentation

Cited by: 0
Authors
Fayou Xu
Bing Luo
Chao Zhang
Li Xu
Mingxing Pu
Bo Li
Affiliations
[1] Xihua University,School of Computer and Software Engineering
[2] Sichuan Police College,Key Laboratory of Intelligent Policing
[3] Xihua University,School of Science
Source
Neural Processing Letters | 2023, Vol. 55
Keywords
Referring image segmentation; Vision and language; Explainable language-structure reasoning;
DOI
Not available
Abstract
Referring image segmentation is a multimodal joint task that aims to segment the object indicated by a natural-language expression from the paired image. However, the diversity of language annotations tends to cause semantic ambiguity, which makes the semantic representation produced by the language encoder imprecise. Existing methods do not correct the language encoding module, so semantic errors in the language features cannot be repaired in subsequent stages, resulting in semantic deviation. To this end, we propose a vision-aware language reasoning model. Intuitively, the segmentation result can be used to guide the reconstruction of the language features, which can be expressed as a tree-structured recursive process. Specifically, we design a language reasoning encoding module and a mask loopback optimization module to optimize the language encoding tree; the feature weights of the tree nodes are learned through backpropagation. To overcome the problem that local language words and visual regions in traditional attention modules easily introduce noise, we use global language prior information to compute the importance of each word and use these weights to re-weight the visual region features, embodied as a language-aware vision attention module. Our experimental results on four benchmark datasets show that the proposed method achieves performance improvements.
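The language-aware vision attention described above can be illustrated with a minimal sketch. This is an assumption-laden reading of the abstract, not the paper's actual module: it assumes the global language prior is a mean-pooled sentence vector and that affinities are plain dot products followed by a softmax.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def language_aware_vision_attention(word_feats, vis_feats):
    """Sketch of language-aware vision attention (hypothetical implementation).

    word_feats: (L, d) per-word language features
    vis_feats:  (N, d) per-region visual features
    Returns a (d,) attended visual feature.
    """
    # Global language prior: mean-pool word features into one sentence vector
    # (assumed; the paper does not specify the pooling).
    global_lang = word_feats.mean(axis=0)          # (d,)
    # Word importance: agreement of each word with the global prior.
    word_w = softmax(word_feats @ global_lang)     # (L,)
    # Importance-weighted language query.
    query = word_w @ word_feats                    # (d,)
    # Re-weight visual region features by their affinity to the query.
    region_w = softmax(vis_feats @ query)          # (N,)
    return region_w @ vis_feats                    # (d,)
```

The point of the global prior is that each word's weight is computed against the whole expression rather than against individual visual regions, which is what keeps noisy local word-region pairings from dominating the attention.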
Pages: 11313–11331
Page count: 18