Deep Binary Reconstruction for Cross-Modal Hashing

Cited: 107
Authors
Hu, Di [1 ]
Nie, Feiping [1 ]
Li, Xuelong [2 ,3 ,4 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci & Engn, Xian 710072, Shaanxi, Peoples R China
[2] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China
[3] Northwestern Polytech Univ, Ctr OPT IMagery Anal & Learning OPTIMAL, Xian 710072, Shaanxi, Peoples R China
[4] Chinese Acad Sci, Xian Inst Opt & Precis Mech, Xian 710119, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Cross-modal hashing; binary reconstruction; image; codes
DOI
10.1109/TMM.2018.2866771
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
To meet the storage and indexing demands of large-scale multimodal data, hashing techniques have been widely employed to learn binary representations for cross-modal retrieval tasks. However, optimizing a hashing objective under the necessary binary constraint is genuinely difficult. A common strategy is to relax the constraint and then binarize the learned real-valued representations in a separate step. In this paper, in contrast to such two-stage methods, we propose to learn the binary codes directly, so that the model can be optimized with a standard gradient descent optimizer. We first present a theoretical guarantee that the multimodal network preserves both inter- and intra-modal consistencies. Building on this guarantee, we propose a novel multimodal deep binary reconstruction model that can be trained to simultaneously model the correlation across modalities and learn the binary hashing codes. To generate binary codes while avoiding vanishing gradients, a novel activation function first scales the input activations to suitable ranges and then feeds them to the tanh function to form the hashing layer; this composite function is named adaptive tanh. Both linear and nonlinear scaling methods are proposed and shown to generate efficient codes after training the network. Extensive ablation studies and comparison experiments on the image2text and text2image retrieval tasks show that the method outperforms several state-of-the-art deep-learning methods across different evaluation metrics.
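For illustration, below is a minimal PyTorch sketch of the adaptive-tanh idea the abstract describes, assuming a single learnable scale parameter applied before tanh; the class name AdaptiveTanh, the init_scale argument, and the layer sizes are illustrative and not taken from the paper, whose exact linear and nonlinear scaling schemes may differ.

    # Sketch of a tanh hashing activation with a learnable input scale.
    # As the scale grows during training, tanh(scale * x) approaches sign(x),
    # yielding near-binary codes, while a moderate scale early in training
    # keeps gradients from vanishing.
    import torch
    import torch.nn as nn

    class AdaptiveTanh(nn.Module):
        def __init__(self, init_scale: float = 1.0):
            super().__init__()
            # Learnable scalar scale, trained jointly with the network.
            self.scale = nn.Parameter(torch.tensor(init_scale))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return torch.tanh(self.scale * x)

    # Usage: a hashing layer producing 64-bit codes from 512-d features.
    hash_layer = nn.Sequential(nn.Linear(512, 64), AdaptiveTanh())
    features = torch.randn(8, 512)
    codes = torch.sign(hash_layer(features))  # binarize at retrieval time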
Pages: 973 - 985
Number of pages: 13
Related Papers
50 records in total
  • [1] Deep Binary Reconstruction for Cross-modal Hashing
    Li, Xuelong
    Hu, Di
    Nie, Feiping
    PROCEEDINGS OF THE 2017 ACM MULTIMEDIA CONFERENCE (MM'17), 2017, : 1398 - 1406
  • [2] Deep Cross-Modal Hashing
    Jiang, Qing-Yuan
    Li, Wu-Jun
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 3270 - 3278
  • [3] Deep Cross-Modal Proxy Hashing
    Tu, Rong-Cheng
    Mao, Xian-Ling
    Tu, Rong-Xin
    Bian, Binbin
    Cai, Chengfei
    Wang, Hongfa
    Wei, Wei
    Huang, Heyan
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (07) : 6798 - 6810
  • [4] Deep Lifelong Cross-Modal Hashing
    Xu, Liming
    Li, Hanqi
    Zheng, Bochuan
    Li, Weisheng
    Lv, Jiancheng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (12) : 13478 - 13493
  • [5] Semantic deep cross-modal hashing
    Lin, Qiubin
    Cao, Wenming
    He, Zhihai
    He, Zhiquan
NEUROCOMPUTING, 2020, 396 : 113 - 122
  • [6] Asymmetric Deep Cross-modal Hashing
    Gu, Jingzi
    Zhang, JinChao
    Lin, Zheng
    Li, Bo
    Wang, Weiping
    Meng, Dan
    COMPUTATIONAL SCIENCE - ICCS 2019, PT V, 2019, 11540 : 41 - 54
  • [7] Cross-Modal Deep Variational Hashing
    Liong, Venice Erin
    Lu, Jiwen
    Tan, Yap-Peng
    Zhou, Jie
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 4097 - 4105
  • [8] Deep medical cross-modal attention hashing
    Zhang, Yong
    Ou, Weihua
    Shi, Yufeng
    Deng, Jiaxin
    You, Xinge
    Wang, Anzhi
WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, 2022, 25 (04) : 1519 - 1536
  • [9] Unsupervised Deep Fusion Cross-modal Hashing
    Huang, Jiaming
    Min, Chen
    Jing, Liping
    ICMI'19: PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2019, : 358 - 366