Contrastive Latent Space Reconstruction Learning for Audio-Text Retrieval

Cited by: 2
Authors
Luo, Kaiyi [1 ,2 ]
Zhang, Xulong [1 ]
Wang, Jianzong [1 ]
Li, Huaxiong [2 ]
Cheng, Ning [1 ]
Xiao, Jing [1 ]
Affiliations
[1] Ping An Technol Shenzhen Co Ltd, Shenzhen, Peoples R China
[2] Nanjing Univ, Dept Control Sci & Intelligent Engn, Nanjing, Peoples R China
Source
2023 IEEE 35th International Conference on Tools with Artificial Intelligence (ICTAI) | 2023
Keywords
Cross-modal Retrieval; Data Reconstruction; Contrastive Learning;
DOI
10.1109/ICTAI59109.2023.00137
CLC Classification Code
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Cross-modal retrieval (CMR) has been extensively applied in various domains, such as multimedia search engines and recommendation systems. Most existing CMR methods focus on image-to-text retrieval, whereas audio-to-text retrieval, a less explored domain, poses a great challenge because it is difficult to uncover discriminative features from audio clips and texts. Existing studies are restricted in the following two ways: 1) Most researchers utilize contrastive learning to construct a common subspace where similarities among data can be measured. However, they consider only cross-modal transformation and neglect intra-modal separability. Besides, the temperature parameter is not adaptively adjusted with semantic guidance, which degrades performance. 2) These methods do not take latent representation reconstruction into account, which is essential for semantic alignment. This paper introduces a novel audio-text oriented CMR approach, termed Contrastive Latent Space Reconstruction Learning (CLSR). CLSR improves contrastive representation learning by taking intra-modal separability into account and adopting an adaptive temperature control strategy. Moreover, latent representation reconstruction modules are embedded into the CMR framework, which improves modal interaction. Experiments comparing CLSR with state-of-the-art methods on two audio-text datasets validate its superiority.
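To make the abstract's ingredients concrete, the sketch below is a rough, illustrative Python (PyTorch) loss that combines a symmetric cross-modal contrastive term with a temperature parameter and a simple latent-reconstruction penalty. It is not the authors' implementation: the learnable scalar temperature stands in for the paper's semantically guided adaptive temperature control, and the linear cross-modal heads stand in for its reconstruction modules; all names and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveReconstructionLoss(nn.Module):
    """Illustrative loss: symmetric InfoNCE with a learnable temperature
    plus an L2 penalty for reconstructing each modality's latent code
    from the other. A sketch, not the paper's exact formulation."""

    def __init__(self, dim=512, recon_weight=0.5, init_temp=0.07):
        super().__init__()
        # Learnable log-temperature (stand-in for the paper's adaptive,
        # semantically guided temperature control).
        self.log_temp = nn.Parameter(torch.log(torch.tensor(init_temp)))
        # Simple cross-modal reconstruction heads (assumed architecture).
        self.audio_to_text = nn.Linear(dim, dim)
        self.text_to_audio = nn.Linear(dim, dim)
        self.recon_weight = recon_weight

    def forward(self, audio_emb, text_emb):
        # audio_emb, text_emb: (batch, dim) paired embeddings.
        a = F.normalize(audio_emb, dim=-1)
        t = F.normalize(text_emb, dim=-1)
        temp = self.log_temp.exp().clamp(min=1e-3)
        logits = a @ t.T / temp                      # (batch, batch) similarities
        labels = torch.arange(a.size(0), device=a.device)
        # Symmetric cross-modal InfoNCE (audio->text and text->audio).
        contrastive = 0.5 * (F.cross_entropy(logits, labels)
                             + F.cross_entropy(logits.T, labels))
        # Latent reconstruction: map one modality's latent to the other's.
        recon = (F.mse_loss(self.audio_to_text(audio_emb), text_emb)
                 + F.mse_loss(self.text_to_audio(text_emb), audio_emb))
        return contrastive + self.recon_weight * recon

if __name__ == "__main__":
    loss_fn = ContrastiveReconstructionLoss(dim=512)
    audio = torch.randn(8, 512)   # dummy audio embeddings
    text = torch.randn(8, 512)    # dummy text embeddings
    print(loss_fn(audio, text).item())
```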
Pages: 913-917 (5 pages)