Attentive Excitation and Aggregation for Bilingual Referring Image Segmentation

Cited by: 4
Authors
Zhou, Qianli [1 ]
Hui, Tianrui [2 ,5 ]
Wang, Rong [1 ]
Hu, Haimiao [3 ]
Liu, Si [4 ]
Affiliations
[1] People's Public Security University of China, 1 Muxidi Nanli, Beijing, China
[2] Institute of Information Engineering, Chinese Academy of Sciences, 89 Minzhuang Rd, Beijing, China
[3] Beihang University, 37 Xueyuan Rd, Beijing, China
[4] Institute of Artificial Intelligence, Beihang University, 37 Xueyuan Rd, Beijing, China
[5] School of Cyber Security, University of Chinese Academy of Sciences, 19 Yuquan Rd, Beijing, China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
Bilingual referring segmentation; channel excitation; spatial aggregation;
DOI
10.1145/3446345
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The goal of referring image segmentation is to identify the object that matches an input natural language expression. Previous methods only support English descriptions, whereas Chinese is also broadly used around the world, which limits the potential applications of this task. We therefore propose to extend existing datasets with Chinese descriptions and preprocessing tools for training and evaluating bilingual referring segmentation models. In addition, previous methods lack the ability to jointly learn channel-wise and spatial-wise cross-modal attention to align the visual and linguistic modalities well. To tackle these limitations, we propose a Linguistic Excitation module to excite image channels guided by language information and a Linguistic Aggregation module to aggregate multimodal information based on image-language relationships. Since different levels of features from the visual backbone encode rich visual information, we also propose a Cross-Level Attentive Fusion module to fuse multilevel features gated by language information. Extensive experiments on four English and Chinese benchmarks show that our bilingual referring image segmentation model outperforms previous methods.
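To make the channel-excitation idea concrete, the following is a minimal PyTorch sketch of language-guided channel excitation in the spirit of the Linguistic Excitation module. The class name, layer sizes, and the SE-style gating design are assumptions for illustration, not the paper's actual implementation.

# A minimal sketch, assuming an SE-style gate conditioned on a
# sentence-level language embedding; names and dimensions are
# hypothetical, not taken from the paper.
import torch
import torch.nn as nn

class LinguisticExcitation(nn.Module):
    """Re-weight visual feature channels using a language vector."""
    def __init__(self, vis_channels: int, lang_dim: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(lang_dim, vis_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(vis_channels // reduction, vis_channels),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, vis: torch.Tensor, lang: torch.Tensor) -> torch.Tensor:
        # vis:  (B, C, H, W) visual feature map
        # lang: (B, D) sentence embedding
        w = self.gate(lang).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return vis * w  # channel-wise excitation guided by language

# Usage: gate a 256-channel feature map with a 512-d sentence embedding.
vis = torch.randn(2, 256, 26, 26)
lang = torch.randn(2, 512)
out = LinguisticExcitation(256, 512)(vis, lang)
print(out.shape)  # torch.Size([2, 256, 26, 26])

Under this reading, the module plays the role of squeeze-and-excitation gating, except the channel weights come from the language embedding rather than from pooled visual features; the paper's Linguistic Aggregation module would complement it with spatial-wise attention.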
Pages: 17