Diffusion Model is Secretly a Training-Free Open Vocabulary Semantic Segmenter

Cited by: 0
Authors
Wang, Jinglong [1 ]
Li, Xiawei [2 ]
Zhang, Jing [1 ]
Xu, Qingyuan [1 ]
Zhou, Qin [1 ]
Yu, Qian [1 ]
Sheng, Lu [1 ]
Xu, Dong [3 ]
Affiliations
[1] Beihang Univ, Sch Software, Beijing 100191, Peoples R China
[2] Baidu, Business Res & Dev Dept, Beijing 100193, Peoples R China
[3] Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Diffusion models; Semantics; Training; Shape; Vocabulary; Data models; Noise reduction; Text to image; Data mining; Stable diffusion; open-vocabulary; semantic segmentation;
DOI
10.1109/TIP.2025.3551648
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Pre-trained text-image discriminative models, such as CLIP, have been explored for open-vocabulary semantic segmentation, but with unsatisfactory results due to the loss of crucial localization information and the lack of awareness of object shapes. Recently, there has been growing interest in extending generative models from generation tasks to semantic segmentation. These approaches use generative models either to synthesize annotated data or to extract features that facilitate segmentation, which typically requires generating a considerable amount of synthetic data or relying on additional mask annotations. In contrast, we uncover the potential of generative text-to-image diffusion models (e.g., Stable Diffusion) as highly efficient open-vocabulary semantic segmenters, and introduce a novel training-free approach named DiffSegmenter. The insight is that, to generate realistic objects that are semantically faithful to the input text, diffusion models must implicitly learn both complete object shapes and the corresponding semantics. We discover that object shapes are characterized by the self-attention maps, while semantics are indicated by the cross-attention maps produced by the denoising U-Net, and together these form the basis of our segmentation results. Additionally, we carefully design effective textual prompts and a category filtering mechanism to further enhance the segmentation results. Extensive experiments on three benchmark datasets show that the proposed DiffSegmenter achieves impressive results for open-vocabulary semantic segmentation.
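The mechanism the abstract sketches, i.e. propagating per-token cross-attention semantics along self-attention shape affinities, can be illustrated with a toy NumPy example. The shapes, the function name `refine_cross_attention`, and the simple matrix-product refinement are illustrative assumptions for this sketch, not the authors' actual implementation:

```python
import numpy as np

def refine_cross_attention(self_attn, cross_attn):
    """Refine per-token cross-attention maps with self-attention.

    self_attn:  (N, N) row-stochastic self-attention over N spatial locations,
                encoding shape/appearance affinities between locations.
    cross_attn: (N, K) attention from each location to K candidate text tokens,
                encoding per-location semantics.
    Returns a (N, K) refined score map; argmax over K gives a label per location.
    """
    # Propagate token semantics along intra-image affinities.
    refined = self_attn @ cross_attn
    # Renormalize so each location's scores over the K tokens sum to 1.
    refined /= refined.sum(axis=1, keepdims=True) + 1e-8
    return refined

# Toy example: 4 spatial locations, 2 candidate category tokens.
rng = np.random.default_rng(0)
self_attn = rng.random((4, 4))
self_attn /= self_attn.sum(axis=1, keepdims=True)  # make rows sum to 1
cross_attn = rng.random((4, 2))

refined = refine_cross_attention(self_attn, cross_attn)
labels = refined.argmax(axis=1)  # per-location category index
```

In the real setting, N would be the number of spatial positions in a U-Net attention layer (e.g., a flattened feature map) and K the number of prompt tokens corresponding to candidate categories.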
Pages: 1895-1907
Page count: 13
References
59 in total
[41]  
Rassin R, 2024, Arxiv, DOI arXiv:2306.08877
[42]  
Ren P., 2023, P 11 INT C LEARN REP, P1
[43]   High-Resolution Image Synthesis with Latent Diffusion Models [J].
Rombach, Robin ;
Blattmann, Andreas ;
Lorenz, Dominik ;
Esser, Patrick ;
Ommer, Bjoern .
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, :10674-10685
[44]  
Shin G, 2022, Arxiv, DOI arXiv:2206.07045
[45]  
Tang R, 2023, PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, P5644
[46]   Diffuse, Attend, and Segment: Unsupervised Zero-Shot Segmentation using Stable Diffusion [J].
Tian, Junjiao ;
Aggarwal, Lavisha ;
Colaco, Andrea ;
Kira, Zsolt ;
Gonzalez-Franco, Mar .
2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2024, 2024, :3554-3563
[47]   Self-supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation [J].
Wang, Yude ;
Zhang, Jie ;
Kan, Meina ;
Shan, Shiguang ;
Chen, Xilin .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2020), 2020, :12272-12281
[48]  
Wu J., 2022, P MIDL, P1623
[49]   DiffuMask: Synthesizing Images with Pixel-level Annotations for Semantic Segmentation Using Diffusion Models [J].
Wu, Weijia ;
Zhao, Yuzhong ;
Shou, Mike Zheng ;
Zhou, Hong ;
Shen, Chunhua .
2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, :1206-1217
[50]   CLIMS: Cross Language Image Matching for Weakly Supervised Semantic Segmentation [J].
Xie, Jinheng ;
Hou, Xianxu ;
Ye, Kai ;
Shen, Linlin .
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, :4473-4482