A multimodal prototypical approach for unsupervised sound classification

Cited by: 1
Authors
Kushwaha, Saksham Singh [1 ,2 ]
Fuentes, Magdalena [2 ,3 ]
Affiliations
[1] NYU, Courant Inst Math Sci, New York, NY 10003 USA
[2] NYU, MARL, New York, NY 10003 USA
[3] NYU, IDM, New York, NY 10003 USA
Source
INTERSPEECH 2023 | 2023
Keywords
zero-shot prototypical learning; text-to-audio retrieval; environmental sound classification; sound recognition
DOI
10.21437/Interspeech.2023-982
CLC (Chinese Library Classification) Number
O42 [Acoustics];
Subject Classification Codes
070206 ; 082403 ;
Abstract
In the context of environmental sound classification, the adaptability of systems is key: which sound classes are of interest depends on the context and the user's needs. Recent advances in text-to-audio retrieval allow for zero-shot audio classification, but performance compared to supervised models remains limited. This work proposes a multimodal prototypical approach that exploits local audio-text embeddings to provide more relevant answers to audio queries, augmenting the adaptability of sound detection in the wild. We do this by first using text to query a nearby community of audio embeddings that best characterizes each query sound, selecting each group's centroid as a prototype. Second, we compare unseen audio to these prototypes for classification. We perform multiple ablation studies to understand the impact of the embedding models and prompts. Our unsupervised approach improves upon the zero-shot state of the art on three sound recognition benchmarks by an average of 12%.
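
The following is a minimal, hypothetical sketch of the prototypical pipeline the abstract describes, not the authors' implementation: it assumes pre-computed, unit-normalized text and audio embeddings (e.g. from a CLAP-style audio-text model), and the pool size, embedding dimension, and neighborhood size k are illustrative placeholders.

import numpy as np

def build_prototypes(text_emb, pool_emb, k=16):
    # For each class text embedding, retrieve the k nearest audio embeddings
    # (its "community") from an unlabeled pool, then average them into a
    # unit-normalized prototype.
    sims = text_emb @ pool_emb.T                    # cosine similarity on unit vectors
    community = np.argsort(-sims, axis=1)[:, :k]    # indices of each class's community
    protos = pool_emb[community].mean(axis=1)       # centroid per class
    return protos / np.linalg.norm(protos, axis=1, keepdims=True)

def classify(audio_emb, protos):
    # Assign each unseen audio embedding to its nearest prototype.
    return np.argmax(audio_emb @ protos.T, axis=1)

# Toy usage with random stand-ins for real embeddings.
rng = np.random.default_rng(0)
unit = lambda x: x / np.linalg.norm(x, axis=1, keepdims=True)
text = unit(rng.normal(size=(10, 512)))     # 10 class prompts, e.g. "the sound of a siren"
pool = unit(rng.normal(size=(1000, 512)))   # unlabeled audio pool
queries = unit(rng.normal(size=(5, 512)))   # unseen audio clips
print(classify(queries, build_prototypes(text, pool)))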
Pages: 266-270
Number of pages: 5