Deep Coupled Metric Learning for Cross-Modal Matching

Cited by: 102
Authors
Liong, Venice Erin [1 ]
Lu, Jiwen [2 ]
Tan, Yap-Peng [3 ]
Zhou, Jie [2 ]
Affiliations
[1] Nanyang Technol Univ, Interdisciplinary Grad Sch, Rapid Rich Object Search Lab, Singapore 639798, Singapore
[2] Tsinghua Univ, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China
[3] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Funding
National Natural Science Foundation of China;
Keywords
Coupled learning; cross-modal matching; deep model; metric learning; multimedia retrieval; spectral regression; face; images;
DOI
10.1109/TMM.2016.2646180
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In this paper, we propose a new deep coupled metric learning (DCML) method for cross-modal matching, which aims to match samples captured from two different modalities (e.g., text versus images, or visible versus near-infrared images). Unlike existing cross-modal matching methods, which learn a linear common space to reduce the modality gap, our DCML designs two feedforward neural networks that learn two sets of hierarchical nonlinear transformations (one set per modality) to map samples from the two modalities into a shared latent feature subspace. In this subspace, the intraclass variation is minimized, the interclass variation is maximized, and the difference between each same-class data pair captured from the two modalities is minimized. Experimental results on four cross-modal matching datasets validate the efficacy of the proposed approach.
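The abstract states the DCML objective in words; the sketch below shows one plausible reading of it in PyTorch. Everything in it (the network widths, tanh activations, latent dimension, margin, the weight alpha, and the names ModalityNet and dcml_loss) is an illustrative assumption for exposition, not the paper's actual implementation.

```python
# Minimal sketch of the DCML idea from the abstract: two modality-specific
# feedforward networks map inputs into a shared latent subspace, trained so
# that intraclass variation shrinks, interclass variation grows, and paired
# cross-modal samples of the same class stay close. All hyperparameters and
# names below are assumptions, not values from the paper.
import torch
import torch.nn as nn


class ModalityNet(nn.Module):
    """Feedforward net learning hierarchical nonlinear transformations."""

    def __init__(self, in_dim: int, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.Tanh(),
            nn.Linear(512, 256), nn.Tanh(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def dcml_loss(z_a, z_b, labels, margin=1.0, alpha=0.5):
    """Combine the three criteria named in the abstract.

    z_a, z_b: latent codes of paired samples from the two modalities
              (row i of each tensor shares the class label labels[i]).
    """
    # (1) Cross-modal pair term: paired same-class samples should coincide.
    pair_term = (z_a - z_b).pow(2).sum(dim=1).mean()

    # Pool both modalities to measure within/between-class structure.
    z = torch.cat([z_a, z_b], dim=0)
    y = torch.cat([labels, labels], dim=0)
    dist = torch.cdist(z, z)                          # pairwise distances
    same = (y.unsqueeze(0) == y.unsqueeze(1)).float() # same-class mask
    eye = torch.eye(len(y), device=z.device)          # exclude self-pairs

    # (2) Intraclass term: pull same-class samples together.
    intra = (dist * (same - eye)).sum() / ((same - eye).sum() + 1e-8)
    # (3) Interclass term: push different-class samples beyond a margin.
    inter = (torch.clamp(margin - dist, min=0) * (1 - same)).sum() / (
        (1 - same).sum() + 1e-8)

    return pair_term + alpha * (intra + inter)
```

In use, paired minibatches of, say, text and image features would be passed through two separate ModalityNet instances and dcml_loss backpropagated through both networks jointly, so the two transformations are learned in a coupled fashion.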
Pages: 1234-1244
Page count: 11