DiffATR: Diffusion-based Generative Modeling for Audio-Text Retrieval

Cited by: 0
Authors
Xin, Yifei [1 ]
Cheng, Xuxin [1 ]
Zhu, Zhihong [1 ]
Yang, Xusheng [1 ]
Zou, Yuexian [1 ]
Affiliations
[1] Peking Univ, Sch ECE, Shenzhen, Peoples R China
Source
INTERSPEECH 2024 | 2024
Keywords
audio-text retrieval; diffusion model; joint probability distribution; out-of-domain retrieval;
DOI
10.21437/Interspeech.2024-405
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Existing audio-text retrieval (ATR) methods are essentially discriminative models that aim to maximize the conditional likelihood, represented as p(candidates|query). Nevertheless, this methodology fails to consider the intrinsic data distribution p(query), leading to difficulties in discerning out-of-distribution data. In this work, we attempt to tackle this constraint from a generative perspective and model the relationship between audio and text as their joint probability p(candidates, query). To this end, we present a diffusion-based ATR framework (DiffATR), which models ATR as an iterative procedure that progressively generates the joint distribution from noise. Throughout its training phase, DiffATR is optimized from both generative and discriminative viewpoints: the generator is refined through a generation loss, while the feature extractor benefits from a contrastive loss, thus combining the merits of both methodologies. Experiments on the AudioCaps and Clotho datasets show superior performance and verify the effectiveness of our approach. Notably, without any alterations, our DiffATR consistently exhibits strong performance in out-of-domain retrieval settings.
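The abstract describes training that combines a contrastive loss on the feature extractor with a diffusion-style generation loss on a generator that recovers the joint distribution from noise. The following is a minimal, hypothetical sketch of that idea in PyTorch; the encoder architectures, the use of softmaxed in-batch similarities as the joint-distribution target, the cosine noise schedule, and the equal loss weighting are all assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical DiffATR-style joint training sketch (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in feature extractor; the paper would use pretrained audio/text encoders."""
    def __init__(self, in_dim, emb_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, emb_dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class NoisePredictor(nn.Module):
    """Predicts the noise added to the (flattened) joint-distribution target."""
    def __init__(self, dist_dim, cond_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dist_dim + cond_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, dist_dim))
    def forward(self, x_t, cond, t):
        # t is a per-sample timestep in (0, 1], appended as an extra feature
        return self.net(torch.cat([x_t, cond, t.unsqueeze(-1)], dim=-1))

def contrastive_loss(a_emb, t_emb, temperature=0.07):
    """Symmetric InfoNCE over the in-batch audio-text similarity matrix."""
    logits = a_emb @ t_emb.t() / temperature
    labels = torch.arange(a_emb.size(0), device=a_emb.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

def diffusion_generation_loss(predictor, target_dist, cond, num_steps=1000):
    """DDPM-style epsilon-prediction loss on the joint-distribution target."""
    b = target_dist.size(0)
    t = torch.randint(1, num_steps + 1, (b,), device=target_dist.device).float() / num_steps
    alpha_bar = torch.cos(t * torch.pi / 2).pow(2).unsqueeze(-1)  # assumed cosine schedule
    noise = torch.randn_like(target_dist)
    x_t = alpha_bar.sqrt() * target_dist + (1 - alpha_bar).sqrt() * noise
    return F.mse_loss(predictor(x_t, cond, t), noise)

# --- toy usage with random features (dimensions are placeholders) ---
audio_enc, text_enc = Encoder(128), Encoder(64)
predictor = NoisePredictor(dist_dim=8, cond_dim=256)
params = list(audio_enc.parameters()) + list(text_enc.parameters()) + list(predictor.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

audio_feats, text_feats = torch.randn(8, 128), torch.randn(8, 64)
a_emb, t_emb = audio_enc(audio_feats), text_enc(text_feats)

# Assumed "joint distribution" target: per-query relevance over the candidate batch,
# approximated here by the softmaxed in-batch similarities.
with torch.no_grad():
    target = F.softmax(a_emb @ t_emb.t() / 0.07, dim=-1)

loss = contrastive_loss(a_emb, t_emb) + diffusion_generation_loss(predictor, target, cond=a_emb)
loss.backward()
opt.step()
```

The point of the sketch is only the two-term objective: the contrastive term shapes the embeddings discriminatively, while the denoising term trains a generator to recover the joint audio-text distribution from noise, mirroring the generative-plus-discriminative training the abstract describes.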
Pages: 1670-1674
Number of pages: 5