TRANSFERABLE MODELS FOR BIOACOUSTICS WITH HUMAN LANGUAGE SUPERVISION

Cited by: 1
Authors
Robinson, David
Robinson, Adelaide [1 ]
Akrapongpisak, Lily [2 ]
Affiliations
[1] University of California, Santa Barbara, Santa Barbara, CA 93106 USA
[2] University of Queensland, Brisbane, QLD, Australia
Source
2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024 | 2024
Keywords
Self-supervised Bioacoustics; Contrastive Language-Audio Pretraining; Passive Acoustic Monitoring
DOI
10.1109/ICASSP48485.2024.10447250
Abstract
Passive acoustic monitoring offers a scalable, non-invasive method for tracking global biodiversity and anthropogenic impacts on species. Although deep learning has become a vital tool for processing this data, current models are inflexible, typically cover only a handful of species, and are limited by data scarcity. In this work, we propose BioLingual, a new model for bioacoustics based on contrastive language-audio pretraining. We first aggregate bioacoustic archives into a language-audio dataset, called AnimalSpeak, with over a million audio-caption pairs holding information on species, vocalization context, and animal behavior. After training on this dataset to connect language and audio representations, our model can identify over a thousand species' calls across taxa, complete bioacoustic tasks zero-shot, and retrieve animal vocalization recordings from natural text queries. When fine-tuned, BioLingual sets a new state-of-the-art on nine tasks in the Benchmark of Animal Sounds. Given its broad taxa coverage and ability to be flexibly queried in human language, we believe this model opens new paradigms in ecological monitoring and research, including free-text search on acoustic monitoring archives. We release our models, dataset, and code.
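Because BioLingual follows the contrastive language-audio pretraining (CLAP) recipe, the zero-shot use described in the abstract can be sketched as scoring an audio clip against free-text prompts with a CLAP-compatible checkpoint. The sketch below uses the Hugging Face transformers CLAP classes; the checkpoint name, prompt wording, and 48 kHz sampling rate are illustrative assumptions rather than details taken from the paper.

# Minimal sketch: zero-shot bioacoustic classification with a CLAP-style model.
# Checkpoint name is an assumption; any CLAP-compatible checkpoint loads the same way.
import numpy as np
import torch
from transformers import ClapModel, ClapProcessor

CHECKPOINT = "davidrrobinson/BioLingual"  # assumed identifier, for illustration only

model = ClapModel.from_pretrained(CHECKPOINT)
processor = ClapProcessor.from_pretrained(CHECKPOINT)

# Candidate captions act as the zero-shot "label set"; wording is illustrative.
prompts = [
    "the call of a humpback whale",
    "the song of a common nightingale",
    "a chorus of spring peeper frogs",
]

# Placeholder 10-second mono clip; replace with a real recording.
# CLAP-style audio encoders commonly expect 48 kHz input (an assumption here).
sampling_rate = 48_000
audio = np.zeros(10 * sampling_rate, dtype=np.float32)

inputs = processor(
    text=prompts,
    audios=audio,
    sampling_rate=sampling_rate,
    return_tensors="pt",
    padding=True,
)

with torch.no_grad():
    outputs = model(**inputs)

# Similarity of the clip to each caption, normalized into a distribution.
probs = outputs.logits_per_audio.softmax(dim=-1).squeeze(0)
for prompt, p in zip(prompts, probs.tolist()):
    print(f"{p:.3f}  {prompt}")

The same embeddings support the free-text retrieval use case mentioned in the abstract: encoding a query with the text tower and archive clips with the audio tower (e.g. via get_text_features and get_audio_features) and ranking clips by cosine similarity.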
Pages: 1316-1320
Number of pages: 5