Deep Semantic Multimodal Hashing Network for Scalable Image-Text and Video-Text Retrievals

Cited by: 73
Authors
Jin, Lu [1]
Li, Zechao [1]
Tang, Jinhui [1]
Affiliation
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Semantics; Hash functions; Correlation; Task analysis; Videos; Learning systems; Sparse matrices; Deep hashing; hash code; image-text retrieval; semantic information; similarity preserving; video-text retrieval; SEARCH;
DOI
10.1109/TNNLS.2020.2997020
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Hashing has been widely applied to multimodal retrieval on large-scale multimedia data due to its efficiency in computation and storage. In this article, we propose a novel deep semantic multimodal hashing network (DSMHN) for scalable image-text and video-text retrieval. The proposed deep hashing framework uses a 2-D convolutional neural network (CNN) as the backbone to capture spatial information for image-text retrieval, and a 3-D CNN as the backbone to capture spatial and temporal information for video-text retrieval. In the DSMHN, two sets of modality-specific hash functions are learned jointly by explicitly preserving both intermodality similarities and intramodality semantic labels. Specifically, under the assumption that the learned hash codes should be optimal for the classification task, the two stream networks are trained jointly to learn the hash functions by embedding the semantic labels in the resultant hash codes. Moreover, a unified deep multimodal hashing framework is proposed to learn compact and high-quality hash codes by simultaneously exploiting feature representation learning, intermodality similarity-preserving learning, semantic label-preserving learning, and hash function learning with different types of loss functions. The proposed DSMHN is a generic and scalable deep hashing framework for both image-text and video-text retrieval, and it can be flexibly integrated with different types of loss functions. We conduct extensive experiments on both single-modal and cross-modal retrieval tasks on four widely used multimodal retrieval data sets. Experimental results on both image-text and video-text retrieval tasks demonstrate that the DSMHN significantly outperforms state-of-the-art methods.
Pages: 1838-1851
Page count: 14
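
The abstract describes a two-stream design: a visual branch (2-D CNN for images or 3-D CNN for videos) and a text branch, each ending in a hash layer, trained jointly with an intermodality similarity-preserving loss and a classification loss applied to the hash codes. Below is a minimal PyTorch-style sketch of that idea. The backbone, the text encoder, the layer sizes, and the specific pairwise and classification loss terms are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualHashNet(nn.Module):
    # Visual stream: a CNN backbone (2-D for images, 3-D for videos) followed by a hash layer.
    def __init__(self, backbone, feat_dim, code_len, num_classes):
        super().__init__()
        self.backbone = backbone                       # assumed to return a flat feature vector
        self.hash_layer = nn.Linear(feat_dim, code_len)
        self.classifier = nn.Linear(code_len, num_classes)  # predicts labels from the codes

    def forward(self, x):
        feat = self.backbone(x)
        code = torch.tanh(self.hash_layer(feat))       # relaxed binary code in (-1, 1)
        return code, self.classifier(code)

class TextHashNet(nn.Module):
    # Text stream: a small MLP over bag-of-words/tag vectors (an illustrative assumption).
    def __init__(self, text_dim, code_len, num_classes):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(text_dim, 512), nn.ReLU())
        self.hash_layer = nn.Linear(512, code_len)
        self.classifier = nn.Linear(code_len, num_classes)

    def forward(self, t):
        code = torch.tanh(self.hash_layer(self.encoder(t)))
        return code, self.classifier(code)

def joint_loss(code_v, code_t, logits_v, logits_t, labels, sim, alpha=1.0):
    # sim[i, j] = 1 if visual sample i and text sample j share a label, else 0.
    # Intermodality similarity-preserving term: pairwise negative log-likelihood over
    # code inner products (a common surrogate, assumed here for illustration).
    theta = code_v @ code_t.t() / 2.0
    sim_loss = -(sim * theta - F.softplus(theta)).mean()
    # Semantic label-preserving term: the hash codes themselves should classify well.
    cls_loss = F.cross_entropy(logits_v, labels) + F.cross_entropy(logits_t, labels)
    return sim_loss + alpha * cls_loss

# At retrieval time, the relaxed codes are binarized with sign() and compared by Hamming distance.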