Unsupervised Cross-Modal Hashing With Modality-Interaction

Cited by: 43
Authors
Tu, Rong-Cheng [1 ,2 ]
Jiang, Jie [3 ]
Lin, Qinghong [4 ]
Cai, Chengfei [3 ]
Tian, Shangxuan [3 ]
Wang, Hongfa [3 ]
Liu, Wei [3 ]
Affiliations
[1] Tencent, Shenzhen 518100, Peoples R China
[2] Beijing Inst Technol, Dept Comp Sci & Technol, Beijing 100081, Peoples R China
[3] Tencent Data Platform, Shenzhen 518051, Guangdong, Peoples R China
[4] Natl Univ Singapore, Elect & Comp Engn, Singapore 138600, Singapore
Keywords
Cross-modal retrieval; Hashing; Modality-interaction; Bit-selection; Attention; Network
DOI
10.1109/TCSVT.2023.3251395
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline codes
0808; 0809
Abstract
Recently, numerous unsupervised cross-modal hashing methods have been proposed to handle image-text retrieval tasks on unlabeled cross-modal data. However, when these methods learn to generate hash codes, almost all of them lack modality-interaction in two respects: 1) the instance similarity matrix used to guide the training of the hashing networks is constructed without image-text interaction, so it fails to capture the fine-grained cross-modal cues needed to elaborately characterize the intrinsic semantic similarity among data points; 2) the binary codes used for the quantization loss are inferior because they are generated by directly quantizing a simple combination of continuous hash codes from different modalities, without interaction among these continuous codes. These problems cause the generated hash codes to be of poor quality and degrade retrieval performance. Hence, in this paper, we propose a novel Unsupervised Cross-modal Hashing with Modality-interaction, termed UCHM. Specifically, by optimizing a novel hash-similarity-friendly loss, a modality-interaction-enabled (MIE) similarity generator is first trained to produce a superior MIE similarity matrix for the training set. The generated MIE similarity matrix is then used as guiding information to train the deep hashing networks. Furthermore, during training of the hashing networks, a novel bit-selection module generates high-quality unified binary codes for the quantization loss through interaction among the continuous codes from different modalities, further enhancing retrieval performance. Extensive experiments on two widely used datasets show that the proposed UCHM outperforms state-of-the-art techniques on cross-modal retrieval tasks.
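The bit-selection idea described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' actual implementation: the magnitude-based selection rule, the function names, and the mean-squared quantization loss are all assumptions. The sketch shows how a unified binary code might be formed per bit by choosing, across modalities, the continuous code value treated as more confident, and how that unified code then anchors the quantization loss for both modalities.

```python
import numpy as np

def unified_binary_codes(h_img, h_txt):
    """Hypothetical bit-selection: for each bit, keep the continuous code with
    the larger magnitude (treated as more confident), then binarize with sign.
    This mixes bits from both modalities, i.e., the codes interact."""
    pick_img = np.abs(h_img) >= np.abs(h_txt)   # (n, bits) boolean mask
    selected = np.where(pick_img, h_img, h_txt) # per-bit cross-modal selection
    return np.sign(selected)                    # unified codes in {-1, +1}

def quantization_loss(h, b):
    """Mean squared gap between continuous codes and the unified binary codes."""
    return float(np.mean((h - b) ** 2))

# Toy continuous hash codes for 4 instances with 8 bits per modality.
rng = np.random.default_rng(0)
h_img = np.tanh(rng.normal(size=(4, 8)))  # continuous image hash codes
h_txt = np.tanh(rng.normal(size=(4, 8)))  # continuous text hash codes

b = unified_binary_codes(h_img, h_txt)
loss = quantization_loss(h_img, b) + quantization_loss(h_txt, b)
```

Pulling both modalities toward one shared binary code, rather than quantizing each modality independently, is what distinguishes this from the "simple combination" baseline the abstract criticizes.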
Pages: 5296-5308
Page count: 13