Hierarchical collaboration for referring image segmentation

Cited by: 1
Authors
Zhang, Wei [1 ,2 ]
Cheng, Zesen [3 ]
Chen, Jie [2 ,3 ]
Gao, Wen [1 ,2 ]
Affiliations
[1] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen 518055, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518000, Peoples R China
[3] Peking Univ, Sch Elect & Comp Engn, Shenzhen 518055, Peoples R China
Funding
National Key R&D Program of China;
Keywords
Referring image segmentation; Image understanding; Cross-modal; TRANSFORMER; QUERY;
DOI
10.1016/j.neucom.2024.128632
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the field of referring segmentation, top-down and bottom-up methods are the two prevailing approaches, and each exhibits characteristic drawbacks. Top-down methods are susceptible to Polar Negative (PN) errors due to their limited understanding of fine-grained multi-modal features, while bottom-up methods lack macro-level object positional information, making them susceptible to Inferior Positive (IP) errors. We find that the two approaches are highly complementary in addressing their respective weaknesses, but combining them directly through a simple average does not yield complementary advantages. We therefore propose a hierarchical collaboration approach that exploits the complementary characteristics of the two existing methods from the perspectives of interaction and fusion, aiming to achieve more precise segmentation results. For interaction, we propose the Complementary Feature Interaction (CFI) module, which gives top-down methods access to fine-grained information and allows bottom-up methods to obtain object positional information interactively. For fusion, Gaussian Scoring Integration (GSI) models the Gaussian performance distributions of the two branches and performs weighted integration by sampling confidence scores from these distributions. We integrate various top-down and bottom-up methods within the proposed architecture and conduct experiments on three standard datasets. The experimental results demonstrate that our method outperforms state-of-the-art independent segmentation algorithms, achieving IoU scores of 77.51, 79.12, and 72.79 on the RefCOCO validation, test A, and test B splits, respectively. Extensive experiments demonstrate that our method can significantly improve segmentation accuracy when fusing different sub-methods.
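The GSI idea described above can be illustrated with a minimal sketch: each branch's performance is modeled as a Gaussian over its observed confidence scores, a confidence value is sampled per branch, and the sampled values are turned into weights for combining the two branch masks. All function and variable names here are illustrative assumptions; the paper's exact formulation (how distributions are fitted, how weights are normalized) may differ.

```python
import numpy as np

def gaussian_scoring_integration(mask_td, mask_bu, scores_td, scores_bu, rng=None):
    """Hedged sketch of Gaussian Scoring Integration (GSI).

    mask_td, mask_bu : per-pixel foreground probabilities (H x W arrays)
        from the top-down and bottom-up branches.
    scores_td, scores_bu : observed confidence scores for each branch,
        used to fit that branch's Gaussian performance distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Fit a Gaussian performance distribution for each branch.
    mu_td, sigma_td = np.mean(scores_td), np.std(scores_td) + 1e-6
    mu_bu, sigma_bu = np.mean(scores_bu), np.std(scores_bu) + 1e-6
    # Sample one confidence score per branch from its distribution.
    c_td = rng.normal(mu_td, sigma_td)
    c_bu = rng.normal(mu_bu, sigma_bu)
    # Softmax the sampled scores into positive weights summing to 1.
    w = np.exp([c_td, c_bu])
    w = w / w.sum()
    # Weighted integration of the two branch predictions.
    return w[0] * mask_td + w[1] * mask_bu
```

Because the weights come from a softmax, the integrated mask is always a convex combination of the two branch predictions, so it stays within the range of the inputs; the sampling step injects the stochasticity that a fixed simple average lacks.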
Pages: 13