Semantic Disentanglement Adversarial Hashing for Cross-Modal Retrieval

Times Cited: 9
Authors
Meng, Min [1 ]
Sun, Jiaxuan [1 ]
Liu, Jigang [2 ]
Yu, Jun [1 ]
Wu, Jigang [1 ]
Affiliations
[1] Guangdong Univ Technol, Sch Comp Sci, Guangzhou 510006, Peoples R China
[2] Ping An Life Insurance China, Shenzhen 518046, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Cross-modal retrieval; hashing; adversarial learning; disentangled representation; REPRESENTATION; NETWORK;
DOI
10.1109/TCSVT.2023.3293104
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Cross-modal hashing has gained considerable attention in cross-modal retrieval due to its low storage cost and high computational efficiency. However, preserving sufficient semantic information in compact hash codes to bridge the modality gap remains challenging. Most existing methods overlook the influence of modality-private information on the discrimination of semantic embeddings, leading to unsatisfactory retrieval performance. In this paper, we propose a novel deep cross-modal hashing method, called Semantic Disentanglement Adversarial Hashing (SDAH), to tackle these challenges for cross-modal retrieval. Specifically, SDAH decouples the original features of each modality into modality-common features carrying semantic information and modality-private features carrying disturbing information. After this preliminary decoupling, the modality-private features are shuffled and treated as positive interactions to enhance the learning of modality-common features, which significantly boosts the discriminativeness and robustness of the semantic embeddings. Moreover, a variational information bottleneck is introduced into the hash feature learning process, which prevents the substantial loss of semantic information caused by compressing high-dimensional features. Finally, discriminative and compact hash codes can be computed directly from the hash features. Extensive comparative and ablation experiments show that SDAH outperforms other state-of-the-art methods.
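The pipeline described in the abstract can be summarized in a short sketch. The PyTorch-style code below is only an illustration of the three ingredients named there (disentangling each modality's features into common and private parts, shuffling the private features within a batch, and passing the result through a variational information bottleneck before the hashing layer); all module names, layer sizes, and the shared hash head are assumptions, not the authors' implementation, and the adversarial and supervised losses are omitted.

```python
# Minimal sketch of the SDAH-style pipeline described in the abstract.
# Module names, dimensions, and the shared hash head are illustrative
# assumptions, not the authors' released code.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Decouples an input feature into a modality-common part (semantic
    information) and a modality-private part (disturbing information)."""
    def __init__(self, in_dim, common_dim, private_dim):
        super().__init__()
        self.common = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                    nn.Linear(512, common_dim))
        self.private = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                     nn.Linear(512, private_dim))

    def forward(self, x):
        return self.common(x), self.private(x)

class VIBHashHead(nn.Module):
    """Variational information bottleneck followed by a hashing layer:
    compress the concatenated feature into a stochastic latent, then map
    it to K-bit relaxed hash features via tanh."""
    def __init__(self, in_dim, bottleneck_dim, n_bits):
        super().__init__()
        self.mu = nn.Linear(in_dim, bottleneck_dim)
        self.logvar = nn.Linear(in_dim, bottleneck_dim)
        self.hash = nn.Linear(bottleneck_dim, n_bits)

    def forward(self, feat):
        mu, logvar = self.mu(feat), self.logvar(feat)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        h = torch.tanh(self.hash(z))       # relaxed hash features in (-1, 1)
        codes = torch.sign(h).detach()     # discrete codes used at retrieval time
        return h, codes, kl

# Toy forward pass for one image/text batch (feature dims are placeholders).
img_enc = ModalityEncoder(4096, 256, 128)
txt_enc = ModalityEncoder(1386, 256, 128)
head = VIBHashHead(256 + 128, 128, 64)

img, txt = torch.randn(8, 4096), torch.randn(8, 1386)
c_i, p_i = img_enc(img)
c_t, p_t = txt_enc(txt)

# Shuffle private features within the batch before pairing them with the
# common features, so the hash head is pushed to rely on the semantic
# (modality-common) part rather than the private noise.
perm = torch.randperm(p_i.size(0))
h_img, code_img, kl_img = head(torch.cat([c_i, p_i[perm]], dim=1))
h_txt, code_txt, kl_txt = head(torch.cat([c_t, p_t[perm]], dim=1))
```

In a full training setup, the relaxed hash features h_img and h_txt would additionally be driven by similarity-preserving and adversarial losses across modalities, with the KL terms acting as the bottleneck regularizer; the sketch only shows the data flow.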
Pages: 1914 - 1926
Number of Pages: 13