Distillation and Supplementation of Features for Referring Image Segmentation

Cited by: 0
Authors
Tan, Zeyu [1 ]
Xu, Dahong [1 ]
Li, Xi [1 ]
Liu, Hong [1 ]
Affiliations
[1] Hunan Normal University, College of Information Science and Engineering, Changsha 410081, People's Republic of China
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Training; Image segmentation; Visualization; Feature extraction; Linguistics; Image reconstruction; Decoding; Filtering; Transformers; Accuracy; Referring image segmentation; multi-modal task; vision-language understanding;
DOI
10.1109/ACCESS.2024.3482108
CLC Classification
TP [Automation Technology, Computer Technology]
Subject Classification
0812
Abstract
Referring Image Segmentation (RIS) aims to accurately match specific object instances in an input image with a natural language expression and to generate the corresponding pixel-level segmentation mask. Existing methods typically obtain multi-modal features by fusing linguistic features with visual features, which are then fed into a mask decoder to generate segmentation masks. However, these methods ignore interfering noise in the multi-modal features, which adversely affects the generation of the target segmentation masks. In addition, the vast majority of current RIS models incorporate only the residual structure inherited from the Transformer block; the limitations of this information-propagation scheme restrict how deeply the model structure can be layered, which in turn degrades training efficacy. In this paper, we propose a RIS method called DSFRIS, which draws on ideas from sparse reconstruction and employs a novel training mechanism for the decoder. Specifically, we propose a feature distillation mechanism for the multi-modal feature-fusion stage and a feature supplementation mechanism for the mask-decoder training process: two novel mechanisms that, respectively, reduce noise in the fused multi-modal features and enrich the feature information available during decoder training. Extensive experiments on three widely used RIS benchmark datasets demonstrate the state-of-the-art performance of the proposed method.
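The abstract describes the two mechanisms only at a high level. The sketch below is a minimal, hypothetical illustration of the general idea — a channel-wise gate that suppresses noisy channels in the fused features, plus a residual-style re-injection of encoder features into the decoder stream — and is not the authors' actual implementation; the function names (`distill_features`, `supplement_features`) and parameters (`gate_w`, `alpha`) are invented for illustration:

```python
import numpy as np

def sigmoid(x):
    """Logistic function, used here to produce gates in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def distill_features(fused, gate_w):
    """Hypothetical feature-distillation step (NOT the paper's exact
    mechanism): derive a per-channel relevance gate from the globally
    pooled fused features and use it to suppress noisy channels.

    fused:  (C, H, W) fused vision-language feature map
    gate_w: (C, C) invented learned weight matrix
    Returns a gated feature map with the same shape as `fused`.
    """
    pooled = fused.mean(axis=(1, 2))       # (C,) global average pool
    gate = sigmoid(gate_w @ pooled)        # (C,) per-channel gate in (0, 1)
    return fused * gate[:, None, None]     # broadcast gate over H and W

def supplement_features(decoder_feat, encoder_feat, alpha=0.5):
    """Hypothetical feature-supplementation step: re-inject encoder
    features into the decoder stream beyond the plain Transformer
    residual, scaled by an invented mixing weight `alpha`."""
    return decoder_feat + alpha * encoder_feat
```

Because the gate lies in (0, 1), distillation in this sketch can only attenuate channel responses, never amplify them, which is one simple way to realize "filtering" of interfering noise.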
Pages: 171269-171279
Page count: 11
References (39 in total)
[1] Bolya, Daniel; Zhou, Chong; Xiao, Fanyi; Lee, Yong Jae. YOLACT: Real-time Instance Segmentation. 2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019), 2019, pp. 9156-9165.
[2] Chen, Jianbo; Shen, Yelong; Gao, Jianfeng; Liu, Jingjing; Liu, Xiaodong. Language-Based Image Editing with Recurrent Attentive Models. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 8721-8729.
[3] Dai, Jifeng; He, Kaiming; Sun, Jian. Instance-aware Semantic Segmentation via Multi-task Network Cascades. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 3150-3158.
[4] Ding, Henghui; Liu, Chang; Wang, Suchen; Jiang, Xudong. Vision-Language Transformer and Query Generation for Referring Segmentation. 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), 2021, pp. 16301-16310.
[5] Feng, Guang; Hu, Zhiwei; Zhang, Lihe; Lu, Huchuan. Encoder Fusion Network with Co-Attention Embedding for Referring Image Segmentation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021, pp. 15501-15510.
[6] Fu, Jun; Liu, Jing; Tian, Haijie; Li, Yong; Bao, Yongjun; Fang, Zhiwei; Lu, Hanqing. Dual Attention Network for Scene Segmentation. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019, pp. 3141-3149.
[7] Gao, Pengcheng; Lu, Ke; Xue, Jian; Lyu, Jiayi; Shao, Ling. A Facial Landmark Detection Method Based on Deep Knowledge Transfer. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(3): 1342-1353.
[8] He, K. M. Mask R-CNN. 2017 IEEE International Conference on Computer Vision (ICCV), 2017, p. 2980. DOI: 10.1109/ICCV.2017.322; 10.1109/TPAMI.2018.2844175.
[9] Hu, Ronghang; Rohrbach, Marcus; Darrell, Trevor. Segmentation from Natural Language Expressions. Computer Vision - ECCV 2016, Part I, 2016, 9905: 108-124.
[10] Huang, S. F. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, p. 10485. DOI: 10.1109/CVPR42600.2020.01050.