Cross-modal attention guided visual reasoning for referring image segmentation

Cited by: 0
Authors
Zhang, Wenjing [1 ]
Hu, Mengnan [1 ,2 ]
Tan, Quange [1 ]
Zhou, Qianli [1 ]
Wang, Rong [1 ,3 ]
Affiliations
[1] Peoples Publ Secur Univ China, Sch Informat & Cyber Secur, Beijing 434020, Peoples R China
[2] Shandong Police Coll, Police Technol & Equipment Innovat Res Ctr, Jinan 250200, Peoples R China
[3] Minist Publ Secur, Key Lab Secur Prevent Technol & Risk Assessment, Beijing 434020, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Referring image segmentation; Multi-scale features; Cross-modal attention mechanism; Graph convolution
DOI
10.1007/s11042-023-14586-9
CLC Classification Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
The goal of referring image segmentation (RIS) is to generate the foreground mask of the object described by a natural language expression. The key to RIS is learning valid multimodal features across the visual and linguistic modalities so that the referred object can be identified accurately. In this paper, a cross-modal attention-guided visual reasoning model for referring segmentation is proposed. First, multi-scale detailed information is captured by a pyramidal convolution module to enhance the visual representation. Then, the entity words of the referring expression and the relevant image regions are aligned by a cross-modal attention mechanism, so that all the entities described by the expression can be identified. Finally, a fully connected multimodal graph is constructed from the multimodal features and the relationship cues of the expression, and visual reasoning is performed step by step on the graph to highlight the correct entity while suppressing irrelevant ones. Experimental results on four benchmark datasets show that the proposed method achieves consistent performance improvements (e.g., +1.13% on UNC, +3.06% on UNC+, +2.1% on G-Ref, and +1.11% on ReferIt). The effectiveness and feasibility of each component of our method are also verified by extensive ablation studies.
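To make the word-to-region alignment step in the abstract concrete, below is a minimal, self-contained PyTorch sketch of a cross-modal attention mechanism: per-word language features act as queries over flattened spatial visual features, yielding region-grounded word features. The module name, dimensions, and the scaled dot-product formulation are illustrative assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Sketch of word-to-region cross-modal attention (illustrative, not the paper's code)."""

    def __init__(self, vis_dim: int, lang_dim: int, emb_dim: int):
        super().__init__()
        self.query = nn.Linear(lang_dim, emb_dim)     # project per-word language features
        self.key = nn.Conv2d(vis_dim, emb_dim, 1)     # project visual features for matching
        self.value = nn.Conv2d(vis_dim, emb_dim, 1)   # project visual features to aggregate

    def forward(self, vis_feat: torch.Tensor, word_feat: torch.Tensor) -> torch.Tensor:
        # vis_feat:  (B, Cv, H, W) spatial visual features (e.g., from a multi-scale backbone)
        # word_feat: (B, T, Cl)    per-word features of the referring expression
        q = self.query(word_feat)                             # (B, T, D)
        k = self.key(vis_feat).flatten(2)                     # (B, D, H*W)
        v = self.value(vis_feat).flatten(2).transpose(1, 2)   # (B, H*W, D)
        # Each word attends over all image regions (scaled dot-product attention).
        attn = torch.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)  # (B, T, H*W)
        return attn @ v  # (B, T, D): region-grounded word features

if __name__ == "__main__":
    # Toy shapes: batch 2, 256-dim 8x8 visual map, 5-word expression with 300-dim embeddings.
    cma = CrossModalAttention(vis_dim=256, lang_dim=300, emb_dim=128)
    out = cma(torch.randn(2, 256, 8, 8), torch.randn(2, 5, 300))
    print(out.shape)  # torch.Size([2, 5, 128])
```

The grounded word features from such a step could then serve as nodes of the fully connected multimodal graph on which the paper performs stepwise reasoning; that graph-convolution stage is not sketched here.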
Pages: 28853-28872
Number of pages: 20