Prompting Segmentation with Sound Is Generalizable Audio-Visual Source Localizer

Cited by: 0
Authors
Wang, Yaoting [1 ]
Liu, Weisong [2 ]
Li, Guangyao [1 ]
Ding, Jian [3 ]
Hu, Di [1 ]
Li, Xi [4 ]
Affiliations
[1] Renmin Univ China, Gaoling Sch Artificial Intelligence, Beijing, Peoples R China
[2] Northwest Polytech Univ, Sch Comp Sci, Xian, Peoples R China
[3] Wuhan Univ, LIESMARS, Wuhan, Peoples R China
[4] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China
Source
THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 6 | 2024
Funding
National Natural Science Foundation of China; US National Science Foundation
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Never having seen an object and heard its sound simultaneously, can a model still accurately localize the object's visual position from input audio alone? In this work, we concentrate on Audio-Visual Localization and Segmentation under the demanding zero-shot and few-shot scenarios. To this end, unlike existing approaches that mostly adopt the encoder-fusion-decoder paradigm to decode localization information from fused audio-visual features, we introduce an encoder-prompt-decoder paradigm, which better copes with data scarcity and varying data distributions by drawing on the abundant knowledge of pre-trained models. Specifically, we first construct a Semantic-aware Audio Prompt (SAP) to help the visual foundation model focus on sounding objects; at the same time, it encourages the semantic gap between the visual and audio modalities to shrink. We then develop a Correlation Adapter (ColA) that keeps training effort minimal while preserving the knowledge of the visual foundation model. Equipped with these components, extensive experiments demonstrate that this new paradigm outperforms fusion-based methods in both the unseen-class and cross-dataset settings. We hope our work can further promote the study of generalization in Audio-Visual Localization and Segmentation for practical application scenarios. Project page: https://github.com/GeWu-Lab/Generalizable-Audio-Visual-Segmentation
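To make the encoder-prompt-decoder idea concrete, the sketch below illustrates the general shape of the approach the abstract describes: audio features are turned into prompt tokens prepended to the visual tokens (in the spirit of SAP), the tokens pass through a frozen layer standing in for the pre-trained visual foundation model, and a small zero-initialized bottleneck adapter (in the spirit of ColA) is the only trainable part. All dimensions, names, and the token-wise linear layer (attention mixing omitted) are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
D = 16         # shared token dimension
N_VIS = 8      # number of visual patch tokens
N_PROMPT = 2   # number of audio-derived prompt tokens
BOTTLENECK = 4 # adapter bottleneck width

# Frozen stand-in for one layer of a pre-trained visual foundation model.
W_frozen = rng.standard_normal((D, D)) / np.sqrt(D)

# Lightweight trainable adapter; W_up is zero-initialized so the adapter
# starts as a no-op and the frozen model's knowledge is preserved.
W_down = rng.standard_normal((D, BOTTLENECK)) / np.sqrt(D)
W_up = np.zeros((BOTTLENECK, D))

def audio_to_prompts(audio_feat):
    """Turn one audio feature vector into prompt tokens (SAP-like stand-in)."""
    return np.tile(audio_feat, (N_PROMPT, 1))  # (N_PROMPT, D)

def encoder_prompt_decoder(visual_tokens, audio_feat):
    """Prepend audio prompts, run the frozen layer, add the adapter residual."""
    prompts = audio_to_prompts(audio_feat)                # (N_PROMPT, D)
    tokens = np.concatenate([prompts, visual_tokens], 0)  # prompts + patches
    hidden = tokens @ W_frozen                            # frozen path
    hidden = hidden + (hidden @ W_down) @ W_up            # adapter residual
    return hidden[N_PROMPT:]                              # keep visual positions

vis = rng.standard_normal((N_VIS, D))
aud = rng.standard_normal(D)
out = encoder_prompt_decoder(vis, aud)
```

Because the adapter's up-projection starts at zero, the frozen model's behavior is untouched before any training step, which is one common way such adapters keep "minimal training efforts" while retaining pre-trained knowledge.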
Pages: 5669-5677
Page count: 9