Pro2SAM: Mask Prompt to SAM with Grid Points for Weakly Supervised Object Localization

Times cited: 0
Authors
Yang, Xi [1 ]
Duan, Songsong [1 ]
Wang, Nannan [1 ]
Gao, Xinbo [2 ]
Affiliations
[1] Xidian Univ, Xian, Peoples R China
[2] Chongqing Univ Posts & Telecommun, Chongqing, Peoples R China
Source
COMPUTER VISION - ECCV 2024, PT LXIX | 2025, Vol. 15127
Funding
National Natural Science Foundation of China;
Keywords
Weakly Supervised Object Localization; Segment Anything Model; Global Token; Mask Prompt;
DOI
10.1007/978-3-031-72890-7_24
CLC number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Weakly Supervised Object Localization (WSOL), which aims to localize objects using only image-level labels, has attracted much attention because of its low annotation cost in real applications. Current studies focus on the Class Activation Map (CAM) of CNNs and the self-attention map of transformers to identify object regions. However, neither CAM nor self-attention maps can learn pixel-level fine-grained information about the foreground objects, which hinders further progress in WSOL. To address this problem, we are the first to leverage the zero-shot generalization and fine-grained segmentation capabilities of the Segment Anything Model (SAM) to boost the activation of integral object regions. Further, to alleviate the semantic ambiguity that arises with single-point prompts to SAM, we propose an innovative mask-prompt-to-SAM (Pro2SAM) network with grid points for the WSOL task. First, we devise a Global Token Transformer (GTFormer) to generate a coarse-grained foreground map as a flexible mask prompt, where the GTFormer jointly embeds patch tokens and novel global tokens to learn foreground semantics. Second, we feed grid points as dense prompts into SAM to maximize the probability of recovering the foreground mask, avoiding the missed objects that a single point/box prompt can cause. Finally, we propose a pixel-level similarity metric to realize mask matching from the mask prompt to SAM's candidate masks, where the mask with the highest score is taken as the final localization map. Experiments show that the proposed Pro2SAM achieves state-of-the-art performance on both CUB-200-2011 and ILSVRC, with 84.03% and 66.85% Top-1 Loc, respectively.
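As a rough illustration of the pipeline's final matching step, the Python sketch below generates a dense grid of point prompts and scores SAM candidate masks against the GTFormer mask prompt, keeping the highest-scoring mask as the localization map. The grid helper, the IoU-style similarity, and the toy mask shapes are all assumptions for illustration; the abstract does not specify the exact pixel-level metric.

import numpy as np

def grid_points(height: int, width: int, points_per_side: int = 16) -> np.ndarray:
    # Uniform grid of (x, y) point prompts covering the whole image,
    # in the spirit of the dense grid prompts described in the abstract.
    ys = np.linspace(0, height - 1, points_per_side)
    xs = np.linspace(0, width - 1, points_per_side)
    return np.stack(np.meshgrid(xs, ys), axis=-1).reshape(-1, 2)

def mask_similarity(prompt_mask: np.ndarray, candidate: np.ndarray) -> float:
    # Pixel-level similarity between the coarse mask prompt and one SAM
    # candidate. IoU is a stand-in; the paper's metric may differ.
    p, c = prompt_mask.astype(bool), candidate.astype(bool)
    inter = np.logical_and(p, c).sum()
    union = np.logical_or(p, c).sum()
    return float(inter) / float(union) if union > 0 else 0.0

def select_localization_map(prompt_mask, candidates):
    # Score every candidate mask against the mask prompt and return the
    # best one as the final localization map.
    scores = [mask_similarity(prompt_mask, m) for m in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]

# Toy usage: random binary masks stand in for the GTFormer foreground
# map and SAM's grid-prompted candidate masks (hypothetical data).
rng = np.random.default_rng(0)
prompt = rng.random((8, 8)) > 0.5
candidates = [rng.random((8, 8)) > 0.5 for _ in range(3)]
print(grid_points(224, 224).shape)           # (256, 2) point prompts
best_mask, score = select_localization_map(prompt, candidates)
print(f"best mask similarity: {score:.3f}")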
Pages: 387-403
Number of pages: 17