Unsupervised Cross-Modal Hashing With Modality-Interaction

Cited by: 22
Authors
Tu, Rong-Cheng [1 ,2 ]
Jiang, Jie [3 ]
Lin, Qinghong [4 ]
Cai, Chengfei [3 ]
Tian, Shangxuan [3 ]
Wang, Hongfa [3 ]
Liu, Wei [3 ]
Affiliations
[1] Tencent, Shenzhen 518100, Peoples R China
[2] Beijing Inst Technol, Dept Comp Sci & Technol, Beijing 100081, Peoples R China
[3] Tencent Data Platform, Shenzhen 518051, Guangdong, Peoples R China
[4] Natl Univ Singapore, Elect & Comp Engn, Singapore 138600, Singapore
Keywords
Cross-modal Retrieval; Hashing; Modality-interaction; Bit-selection; ATTENTION; NETWORK;
DOI
10.1109/TCSVT.2023.3251395
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Discipline Codes
0808 ; 0809 ;
Abstract
Recently, numerous unsupervised cross-modal hashing methods have been proposed to handle image-text retrieval tasks on unlabeled cross-modal data. However, when these methods learn to generate hash codes, almost all of them lack modality-interaction in the following two aspects: 1) The instance similarity matrix used to guide the training of the hashing networks is constructed without image-text interaction, and thus fails to capture the fine-grained cross-modal cues needed to elaborately characterize the intrinsic semantic similarity among the datapoints. 2) The binary codes used for the quantization loss are inferior because they are generated by directly quantizing a simple combination of the continuous hash codes from different modalities, without any interaction among these continuous codes. These problems degrade the quality of the generated hash codes and, in turn, the retrieval performance. Hence, in this paper, we propose a novel Unsupervised Cross-modal Hashing with Modality-interaction, termed UCHM. Specifically, by optimizing a novel hash-similarity-friendly loss, a modality-interaction-enabled (MIE) similarity generator is first trained to produce a superior MIE similarity matrix for the training set. The generated MIE similarity matrix is then utilized as guiding information to train the deep hashing networks. Furthermore, during the training of the hashing networks, a novel bit-selection module is proposed to generate high-quality unified binary codes for the quantization loss via interaction among the continuous codes from different modalities, thereby further enhancing the retrieval performance. Extensive experiments on two widely used datasets show that the proposed UCHM outperforms state-of-the-art techniques on cross-modal retrieval tasks.
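The two failure modes the abstract names, and the bit-selection remedy, can be illustrated with a minimal sketch. All function names are hypothetical, and the per-bit selection rule (keep the bit from whichever modality has the larger continuous activation magnitude) is an assumed reading of the paper's bit-selection idea, not its actual algorithm; the losses are likewise schematic stand-ins for the paper's objectives.

```python
import numpy as np

def naive_unified_codes(img_codes, txt_codes):
    """The baseline the abstract critiques: quantize a simple combination
    (here, the sum) of the two modalities' continuous codes, with no
    interaction between them before quantization."""
    return np.sign(img_codes + txt_codes)

def bit_selected_codes(img_codes, txt_codes):
    """Hypothetical bit-selection: for each bit position, keep the modality
    whose continuous activation is more confident (larger magnitude), so the
    continuous codes interact bit-by-bit before quantization."""
    pick_img = np.abs(img_codes) >= np.abs(txt_codes)
    fused = np.where(pick_img, img_codes, txt_codes)
    return np.sign(fused)

def hashing_loss(img_codes, txt_codes, S, unified_binary):
    """Schematic objective: a similarity-guidance term that pushes cross-modal
    cosine similarities toward a precomputed similarity matrix S (the role the
    MIE matrix plays), plus a quantization term toward unified binary codes."""
    def cos_sim(a, b):
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return a @ b.T
    sim_loss = np.mean((cos_sim(img_codes, txt_codes) - S) ** 2)
    quant_loss = (np.mean((img_codes - unified_binary) ** 2)
                  + np.mean((txt_codes - unified_binary) ** 2))
    return sim_loss + quant_loss
```

In this sketch the only difference between the naive fusion and the bit-selected fusion is whether the two continuous code sets are compared per bit before `np.sign` is applied, which is the kind of modality-interaction the abstract argues is missing from prior quantization schemes.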
Pages: 5296-5308
Page count: 13