Deep Semantic Space with Intra-class Low-rank Constraint for Cross-modal Retrieval

Cited by: 9
Authors
Kang, Peipei [1 ]
Lin, Zehang [1 ]
Yang, Zhenguo [1 ,2 ]
Fang, Xiaozhao [3 ]
Li, Qing [4 ]
Liu, Wenyin [1 ]
Affiliations
[1] Guangdong Univ Technol, Sch Comp Sci & Technol, Guangzhou, Guangdong, Peoples R China
[2] City Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
[3] Guangdong Univ Technol, Dept Automat, Guangzhou, Guangdong, Peoples R China
[4] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
Source
ICMR'19: PROCEEDINGS OF THE 2019 ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL | 2019
Funding
National Natural Science Foundation of China
Keywords
cross-modal retrieval; deep neural networks; intra-class low-rank; semantic space;
DOI
10.1145/3323873.3325029
Chinese Library Classification (CLC)
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
In this paper, a novel Deep Semantic Space learning model with an Intra-class Low-rank constraint (DSSIL) is proposed for cross-modal retrieval. The model is composed of two subnetworks for modality-specific representation learning, followed by projection layers that map into a common space. In particular, DSSIL takes semantic consistency into account to fuse the cross-modal data in a high-level common space, and constrains the matrix of common representations within the same class to be low-rank, in order to make the intra-class representations more closely related. More formally, two regularization terms are devised for these two aspects and incorporated into the objective of DSSIL. To optimize the modality-specific subnetworks and the projection layers simultaneously by gradient descent, the nonconvex low-rank constraint is approximated by minimizing a few smallest singular values of the intra-class matrix, supported by theoretical analysis. Extensive experiments conducted on three public datasets demonstrate the superiority of DSSIL for cross-modal retrieval compared with state-of-the-art methods.
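As a rough illustration of the relaxation described in the abstract (penalizing a few smallest singular values of each intra-class representation matrix instead of its rank), a minimal PyTorch sketch is given below. This is not the authors' code: the function name intra_class_lowrank_loss, the use of PyTorch, and the hyperparameter k are assumptions for illustration only.

import torch

def intra_class_lowrank_loss(features: torch.Tensor, labels: torch.Tensor, k: int = 2) -> torch.Tensor:
    """Sketch of a smallest-singular-value relaxation of the intra-class low-rank constraint.

    features: (N, d) common-space representations produced by the projection layers.
    labels:   (N,) integer class labels.
    k:        number of smallest singular values to penalize per class (assumed hyperparameter).
    """
    loss = features.new_zeros(())
    for c in labels.unique():
        class_feats = features[labels == c]        # representation matrix of one class
        if class_feats.shape[0] < 2:
            continue                               # a rank penalty needs more than one sample
        svals = torch.linalg.svdvals(class_feats)  # singular values in descending order
        kk = min(k, svals.numel())
        loss = loss + svals[-kk:].sum()            # sum of the k smallest singular values
    return loss

Because torch.linalg.svdvals is differentiable, such a term can be added to a retrieval objective and minimized jointly with the subnetworks and projection layers by ordinary gradient descent, which is the practical point of the relaxation the abstract describes.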
Pages: 226-234
Number of pages: 9
Related Papers
50 records in total
  • [41] Deep Semantic Correlation Learning based Hashing for Multimedia Cross-Modal Retrieval
    Gong, Xiaolong
    Huang, Linpeng
    Wang, Fuwei
    2018 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2018 : 117 - 126
  • [42] Multi-attention based semantic deep hashing for cross-modal retrieval
    Liping Zhu
    Gangyi Tian
    Bingyao Wang
    Wenjie Wang
    Di Zhang
    Chengyang Li
    Applied Intelligence, 2021, 51 : 5927 - 5939
  • [43] Adversarial Cross-Modal Retrieval Based on Association Constraint
    Guo Q.
    Qian Y.
    Liang X.
    Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, 2021, 34 (01) : 68 - 76
  • [44] Learning Low-Rank Class-Specific Dictionary and Sparse Intra-Class Variant Dictionary for Face Recognition
    Tang, Xin
    Feng, Guo-can
    Li, Xiao-xin
    Cai, Jia-xin
    PLOS ONE, 2015, 10 (11)
  • [45] Cross-Modal Retrieval Using Deep Learning
    Malik, Shaily
    Bhardwaj, Nikhil
    Bhardwaj, Rahul
    Kumar, Saurabh
    PROCEEDINGS OF THIRD DOCTORAL SYMPOSIUM ON COMPUTATIONAL INTELLIGENCE, DOSCI 2022, 2023, 479 : 725 - 734
  • [46] Generalized Semantic Preserving Hashing for Cross-Modal Retrieval
    Mandal, Devraj
    Chaudhury, Kunal N.
    Biswas, Soma
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (01) : 102 - 112
  • [47] Deep Relation Embedding for Cross-Modal Retrieval
    Zhang, Yifan
    Zhou, Wengang
    Wang, Min
    Tian, Qi
    Li, Houqiang
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 617 - 627
  • [48] A Scalable Architecture for Cross-Modal Semantic Annotation and Retrieval
    Moeller, Manuel
    Sintek, Michael
    KI 2008: ADVANCES IN ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2008, 5243 : 391 - 392
  • [49] Semantic-Guided Hashing for Cross-Modal Retrieval
    Chen, Zhikui
    Du, Jianing
    Zhong, Fangming
    Chen, Shi
    2019 IEEE FIFTH INTERNATIONAL CONFERENCE ON BIG DATA COMPUTING SERVICE AND APPLICATIONS (IEEE BIGDATASERVICE 2019), 2019 : 182 - 190
  • [50] Semantic ranking structure preserving for cross-modal retrieval
    Liu, Hui
    Feng, Yong
    Zhou, Mingliang
    Qiang, Baohua
    APPLIED INTELLIGENCE, 2021, 51 (03) : 1802 - 1812