Deep multi-view representation learning for social images

Cited by: 11
Authors
Huang, Feiran [1 ]
Zhang, Xiaoming [2 ]
Zhao, Zhonghua [3 ]
Li, Zhoujun [1 ]
He, Yueying [3 ]
Affiliations
[1] Beihang Univ, State Key Lab Software Dev Environm, Beijing 100191, Peoples R China
[2] Beihang Univ, Sch Cyber Sci & Technol, Beijing 100191, Peoples R China
[3] Coordinat Ctr China, Natl Comp Network Emergency Response Tech Team, Beijing 100029, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multi-view learning; Image embedding; Representation learning; Stacked autoencoder;
DOI
10.1016/j.asoc.2018.08.010
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-view representation learning for social images has recently made remarkable achievements in many tasks, such as cross-view classification and cross-modal retrieval. Since social images usually contain link information besides their multi-modal contents (e.g., text descriptions and visual content), relying on the data content alone may yield sub-optimal multi-view representations of the social images. In this paper, we propose a Deep Multi-View Embedding Model (DMVEM) to learn joint embeddings for three views: the visual content, the associated text descriptions, and their relations. To effectively encode the link information, a weighted relation network is built from the linkages between social images and then embedded into a low-dimensional vector space using the Skip-Gram model. The learned vector is regarded as the third view alongside the visual content and text description. To learn a joint representation from the three views, a deep learning model with a three-branch nonlinear neural network is proposed. A three-view bi-directional loss function captures the correlation between the three views, and a stacked autoencoder preserves the self-structure and reconstructability of the learned representation for each view. Comprehensive experiments are conducted on image-to-text, text-to-image, and image-to-image search tasks. Compared to state-of-the-art multi-view embedding methods, our approach achieves significantly better performance. (C) 2018 Elsevier B.V. All rights reserved.
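The abstract describes a bi-directional loss applied pairwise across the three views. The paper's exact formulation is not given here, so the following is only a minimal numpy sketch under common assumptions: a hinge-style bi-directional margin ranking loss on cosine similarity between matched and mismatched pairs, summed over the three view pairs. The function names, the `margin` value, and the view matrices `I`, `T`, `L` are all illustrative, not the authors' implementation.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors, with a small epsilon for stability.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def bidirectional_margin_loss(X, Y, margin=0.2):
    """Bi-directional ranking loss between two views X, Y (n x d row embeddings).

    Each matched pair (x_i, y_i) should score higher than any mismatched pair
    (x_i, y_j) or (x_j, y_i), j != i, by at least `margin` -- enforced in both
    the X->Y and Y->X directions."""
    n = X.shape[0]
    loss = 0.0
    for i in range(n):
        pos = cosine(X[i], Y[i])
        for j in range(n):
            if j == i:
                continue
            loss += max(0.0, margin - pos + cosine(X[i], Y[j]))  # X -> Y direction
            loss += max(0.0, margin - pos + cosine(X[j], Y[i]))  # Y -> X direction
    return loss / (2 * n * (n - 1))

def three_view_loss(I, T, L, margin=0.2):
    # Pairwise sum over the three views: image (I), text (T), link/relation (L).
    return (bidirectional_margin_loss(I, T, margin)
            + bidirectional_margin_loss(I, L, margin)
            + bidirectional_margin_loss(T, L, margin))
```

With perfectly aligned, mutually orthogonal embeddings (e.g., identity matrices for all three views), every hinge term is zero and the loss vanishes; misaligned embeddings incur a positive penalty.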
Pages: 106-118
Page count: 13