Contrastive Latent Space Reconstruction Learning for Audio-Text Retrieval

Cited by: 2
Authors
Luo, Kaiyi [1 ,2 ]
Zhang, Xulong [1 ]
Wang, Jianzong [1 ]
Li, Huaxiong [2 ]
Cheng, Ning [1 ]
Xiao, Jing [1 ]
Affiliations
[1] Ping An Technol Shenzhen Co Ltd, Shenzhen, Peoples R China
[2] Nanjing Univ, Dept Control Sci & Intelligent Engn, Nanjing, Peoples R China
Source
2023 IEEE 35TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI) | 2023
Keywords
Cross-modal Retrieval; Data Reconstruction; Contrastive Learning;
DOI
10.1109/ICTAI59109.2023.00137
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Cross-modal retrieval (CMR) has been widely applied in domains such as multimedia search engines and recommendation systems. Most existing CMR methods focus on image-to-text retrieval, whereas audio-to-text retrieval, a less explored area, remains challenging because discriminative features are hard to uncover from audio clips and texts. Existing studies are limited in two ways: 1) Most utilize contrastive learning to construct a common subspace in which cross-modal similarities can be measured, but they consider only cross-modal transformation and neglect intra-modal separability; moreover, the temperature parameter is not adaptively adjusted under semantic guidance, which degrades performance. 2) These methods do not take latent representation reconstruction into account, which is essential for semantic alignment. This paper introduces a novel audio-text oriented CMR approach, termed Contrastive Latent Space Reconstruction Learning (CLSR). CLSR improves contrastive representation learning by accounting for intra-modal separability and adopting an adaptive temperature control strategy. Moreover, latent representation reconstruction modules are embedded into the CMR framework, improving modal interaction. Experiments on two audio-text datasets against several state-of-the-art methods validate the superiority of CLSR.
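The abstract's first point — contrastive learning over a common subspace that also enforces intra-modal separability — can be illustrated with a minimal NumPy sketch of a symmetric InfoNCE-style objective whose negatives come from both the other modality and the anchor's own modality. This is not the paper's implementation: a fixed temperature `tau` stands in for CLSR's adaptive temperature strategy (which the abstract does not detail), and all names here are illustrative.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so dot products are cosines."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_loss(audio, text, tau=0.07):
    """Symmetric contrastive loss over a batch of paired audio/text embeddings.

    For each anchor, the positive is its cross-modal pair; negatives include
    BOTH the remaining cross-modal samples and the anchor's own modality
    (self excluded), so the loss also pushes intra-modal samples apart.
    """
    a, t = l2_normalize(audio), l2_normalize(text)
    n = a.shape[0]
    losses = []
    for anchor, other in ((a, t), (t, a)):           # audio→text and text→audio
        cross = anchor @ other.T / tau               # (n, n), diagonal = positives
        intra = anchor @ anchor.T / tau              # intra-modal similarities
        np.fill_diagonal(intra, -np.inf)             # mask self-similarity
        logits = np.concatenate([cross, intra], axis=1)   # (n, 2n)
        log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        losses.append(-log_prob[np.arange(n), np.arange(n)].mean())
    return float(np.mean(losses))
```

With perfectly aligned pairs the loss approaches zero, while mismatched pairs are penalized; in training, `tau` (or its adaptive counterpart) controls how sharply hard negatives are weighted.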
Pages: 913-917 (5 pages)