Distillation and Supplementation of Features for Referring Image Segmentation

Cited by: 0
Authors
Tan, Zeyu [1 ]
Xu, Dahong [1 ]
Li, Xi [1 ]
Liu, Hong [1 ]
Affiliations
[1] Hunan Normal University, College of Information Science and Engineering, Changsha 410081, People's Republic of China
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Training; Image segmentation; Visualization; Feature extraction; Linguistics; Image reconstruction; Decoding; Filtering; Transformers; Accuracy; Referring image segmentation; multi-modal task; vision-language understanding;
DOI
10.1109/ACCESS.2024.3482108
Chinese Library Classification (CLC) number
TP [Automation Technology, Computer Technology];
Discipline classification code
0812;
Abstract
Referring Image Segmentation (RIS) aims to match a specific object instance in an input image with a natural language expression and to generate the corresponding pixel-level segmentation mask. Existing methods typically obtain multi-modal features by fusing linguistic features with visual features and feed them into a mask decoder that produces the segmentation mask. However, these methods ignore interfering noise in the multi-modal features, which adversely affects generation of the target mask. In addition, the vast majority of current RIS models rely only on the residual connections inherited from the Transformer block; the limitations of this information-propagation scheme hinder deeper layering of the model structure and consequently degrade training efficacy. In this paper, we propose an RIS method called DSFRIS, which draws on sparse reconstruction and employs a novel training mechanism for the decoder. Specifically, we introduce a feature distillation mechanism for the multi-modal feature-fusion stage and a feature supplementation mechanism for the mask-decoder training process; these two mechanisms respectively reduce the noise in the fused multi-modal features and enrich the feature information available during decoder training. Extensive experiments on three widely used RIS benchmark datasets demonstrate that the proposed method achieves state-of-the-art performance.
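The abstract describes the pipeline only at a high level (fuse linguistic and visual features, filter noise from the fused features, decode a pixel-level mask). The sketch below is a minimal, hypothetical illustration of that flow; the module names, shapes, and the gating-based noise filter are assumptions for illustration and are not the DSFRIS architecture described in the paper.

```python
# Hypothetical sketch: cross-modal fusion with a noise-filtering gate feeding a mask decoder.
# All names, dimensions, and the gating formulation are illustrative assumptions, not DSFRIS.
import torch
import torch.nn as nn

class FusionWithDistillation(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        # Cross-attention: flattened visual tokens attend to word features.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Hypothetical "distillation" gate that suppresses noisy fused channels.
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, vis_tokens, lang_tokens):
        # vis_tokens: (B, HW, C) visual features; lang_tokens: (B, L, C) word features.
        fused, _ = self.cross_attn(vis_tokens, lang_tokens, lang_tokens)
        fused = self.norm(vis_tokens + fused)   # residual fusion
        return fused * self.gate(fused)         # filter interfering noise

class TinyMaskDecoder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Conv2d(dim, 1, kernel_size=1)  # per-pixel mask logits

    def forward(self, fused, h, w):
        b, hw, c = fused.shape
        fmap = fused.transpose(1, 2).reshape(b, c, h, w)
        return self.proj(fmap)                  # (B, 1, H, W) logits

# Usage on dummy tensors.
fusion, decoder = FusionWithDistillation(), TinyMaskDecoder()
vis = torch.randn(2, 32 * 32, 256)   # 32x32 visual grid, 256-d features
lang = torch.randn(2, 12, 256)       # 12 word tokens
mask_logits = decoder(fusion(vis, lang), 32, 32)
print(mask_logits.shape)             # torch.Size([2, 1, 32, 32])
```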
Pages: 171269-171279
Number of pages: 11