Modality-Dependent Cross-Media Retrieval

Cited by: 75
Authors
Wei, Yunchao [1 ,3 ]
Zhao, Yao [1 ,3 ]
Zhu, Zhenfeng [1 ,3 ]
Wei, Shikui [1 ,3 ,4 ]
Xiao, Yanhui [1 ,3 ]
Feng, Jiashi [2 ]
Yan, Shuicheng [2 ]
Affiliations
[1] Beijing Jiaotong Univ, Inst Informat Sci, Beijing 100044, Peoples R China
[2] Natl Univ Singapore, Dept Elect & Comp Engn, Singapore 117583, Singapore
[3] Beijing Key Lab Adv Informat Sci & Network Techno, Beijing 100044, Peoples R China
[4] China Three Gorges Univ, Hubei Key Lab Intelligent Vis Based Monitoring Hy, Yichang 443002, Hubei, Peoples R China
Keywords
Design; Algorithms; Performance; Cross-media retrieval; subspace learning; canonical correlation analysis
DOI
10.1145/2775109
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In this article, we investigate cross-media retrieval between images and text, that is, using an image to search for text (I2T) and using text to search for images (T2I). Existing cross-media retrieval methods usually learn a single pair of projections, by which the original features of images and text can be projected into a common latent space to measure content similarity. However, using the same projections for the two different retrieval tasks (I2T and T2I) may lead to a tradeoff between their respective performances rather than the best performance on each. Unlike previous works, we propose a modality-dependent cross-media retrieval (MDCR) model, in which two pairs of projections are learned, one for each cross-media retrieval task, instead of a single pair. Specifically, by jointly optimizing the correlation between images and text and the linear regression from one modal space (image or text) to the semantic space, two pairs of mappings are learned that project images and text from their original feature spaces into two common latent subspaces (one for I2T and the other for T2I). Extensive experiments show the superiority of the proposed MDCR over other methods. In particular, based on the 4,096-dimensional convolutional neural network (CNN) visual feature and the 100-dimensional latent Dirichlet allocation (LDA) textual feature, the proposed method achieves an mAP score of 41.5%, a new state-of-the-art result on the Wikipedia dataset.
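To make the jointly optimized objective in the abstract concrete, the following is a minimal sketch of a task-specific formulation of the kind described; the notation (feature matrices X_I and X_T, semantic matrix S, projections U and V, weight lambda, regularizer R) is assumed here for illustration and does not appear in the record itself. For the I2T task, the pair of projections (U, V) could be obtained by jointly penalizing an image-text correlation term and a linear regression from the image space to the semantic space:

\[
\min_{U,\,V}\;\; \lambda\,\lVert X_I U - X_T V \rVert_F^2 \;+\; (1-\lambda)\,\lVert X_I U - S \rVert_F^2 \;+\; R(U, V),
\]

where X_I and X_T hold the image and text features row-wise, S is the semantic (class-indicator) matrix, R(U, V) is a regularizer, and \lambda \in [0, 1] balances the two terms. The T2I objective would mirror this with a regression term \lVert X_T V - S \rVert_F^2 instead, yielding the second, task-specific pair of projections.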
Pages: 13