Click-through-based Subspace Learning for Image Search

Cited: 10
Authors
Pan, Yingwei [1 ]
Yao, Ting [2 ]
Tian, Xinmei [1 ]
Li, Houqiang [1 ]
Ngo, Chong-Wah [2 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Anhui, Peoples R China
[2] City Univ Hong Kong, Kowloon, Hong Kong, Peoples R China
Source
PROCEEDINGS OF THE 2014 ACM CONFERENCE ON MULTIMEDIA (MM'14) | 2014
Funding
National Natural Science Foundation of China;
Keywords
Image search; subspace learning; click-through data; DNN image representation;
DOI
10.1145/2647868.2656404
CLC Classification Number
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
One of the fundamental problems in image search is to rank image documents according to a given textual query. We address two limitations of existing image search engines in this paper. First, there is no straightforward way of comparing textual keywords with visual image content. Image search engines therefore rely heavily on the surrounding texts, which are often noisy or too sparse to accurately describe the image content. Second, ranking functions are trained on query-image pairs labeled by human annotators, making the annotation expensive and hard to scale up. We demonstrate that these two fundamental challenges can be mitigated by jointly exploring subspace learning and the use of click-through data. The former aims to create a latent subspace in which information from the originally incomparable views (i.e., textual and visual views) can be compared, while the latter exploits the largely available and freely accessible click-through data (i.e., "crowdsourced" human intelligence) for understanding queries. Specifically, we investigate a series of click-through-based subspace learning techniques (CSL) for image search. We conduct experiments on the MSR-Bing Grand Challenge, and the final evaluation performance achieves DCG@25 = 0.47225. Moreover, the feature dimension is significantly reduced by several orders of magnitude (e.g., from thousands to tens).
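The abstract describes mapping textual queries and visual image features into a shared low-dimensional subspace learned from click-through data, so that queries and images become directly comparable. The sketch below is only a rough illustration of that general idea, not the authors' CSL formulation: it learns two projection matrices from the SVD of a click-weighted cross-covariance and then ranks images by similarity in the learned subspace. All function names (learn_click_subspace, rank_images) and the toy data are hypothetical.

```python
# Minimal sketch of click-through-weighted cross-view subspace learning.
# Assumption: bag-of-words query features and DNN image features; this is
# NOT the paper's exact CSL method, just one simple cross-view projection.
import numpy as np

def learn_click_subspace(Q, V, clicks, dim=20):
    """Q: (n_queries, dq) query features; V: (n_images, dv) image features;
    clicks: (n_queries, n_images) click counts; dim: subspace dimension."""
    # Click-weighted cross-covariance between the textual and visual views.
    C = Q.T @ clicks @ V                      # shape (dq, dv)
    U, _, Vt = np.linalg.svd(C, full_matrices=False)
    Pq, Pv = U[:, :dim], Vt[:dim, :].T        # query / image projections
    return Pq, Pv

def rank_images(query_feat, V, Pq, Pv):
    """Rank images by cosine similarity to the query in the shared subspace."""
    q = query_feat @ Pq
    X = V @ Pv
    q = q / (np.linalg.norm(q) + 1e-12)
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    return np.argsort(-(X @ q))               # best-matching images first

# Toy usage: 5 queries, 8 images, text dim 100, visual dim 256.
rng = np.random.default_rng(0)
Q = rng.random((5, 100))
V = rng.random((8, 256))
clicks = rng.integers(0, 5, size=(5, 8)).astype(float)
Pq, Pv = learn_click_subspace(Q, V, clicks, dim=10)
print(rank_images(Q[0], V, Pq, Pv))
```

The `dim` parameter reflects the dimensionality reduction the abstract mentions (from thousands of raw feature dimensions down to tens in the latent subspace).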
Pages: 233 - 236
Number of pages: 4
Related Papers
50 records in total
  • [21] Joint subspace learning and subspace clustering based unsupervised feature selection
    Xiao, Zijian
    Chen, Hongmei
    Mi, Yong
    Luo, Chuan
    Horng, Shi-Jinn
    Li, Tianrui
    NEUROCOMPUTING, 2025, 635
  • [22] Learning to rank with click-through features in a reinforcement learning framework
    Keyhanipour, Amir Hosein
    Moshiri, Behzad
    Piroozmand, Maryam
    Oroumchian, Farhad
    Moeini, Ali
    INTERNATIONAL JOURNAL OF WEB INFORMATION SYSTEMS, 2016, 12 (04) : 448 - 476
  • [23] Grassmannian learning mutual subspace method for image set recognition
    Souza, Lincon S.
    Sogi, Naoya
    Gatto, Bernardo B.
    Kobayashi, Takumi
    Fukui, Kazuhiro
    NEUROCOMPUTING, 2023, 517 : 20 - 33
  • [24] Subspace Learning Network: An Efficient ConvNet for PolSAR Image Classification
    Guo, Jun
    Wang, Ling
    Nu, Daiyin
    Hu, Chang-Yu
    Xue, Chen-Yan
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2019, 16 (12) : 1849 - 1853
  • [25] Robust and adaptive subspace learning for fast hyperspectral image denoising
    Wu, Yue
    Li, Weisheng
    APPLIED INTELLIGENCE, 2024, 54 (22) : 11400 - 11411
  • [26] Image Structure Subspace Learning Using Structural Similarity Index
    Ghojogh, Benyamin
    Karray, Fakhri
    Crowley, Mark
    IMAGE ANALYSIS AND RECOGNITION, ICIAR 2019, PT I, 2019, 11662 : 33 - 44
  • [27] IntentSearch: Capturing User Intention for One-Click Internet Image Search
    Tang, Xiaoou
    Liu, Ke
    Cui, Jingyu
    Wen, Fang
    Wang, Xiaogang
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2012, 34 (07) : 1342 - 1353
  • [28] Filtering objectionable information access based on click-through behaviours with deep learning methods
    Lee, Lung-Hao
    Li, Jian-Hong
    Ku, Szu-Wei
    Tseng, Yuen-Hsien
    JOURNAL OF INFORMATION SCIENCE, 2023
  • [30] Cross-scene hyperspectral image classification based on DWT and manifold-constrained subspace learning
    Ye, Minchao
    Zheng, Wenbin
    Lu, Huijuan
    Zeng, Xianting
    Qian, Yuntao
    INTERNATIONAL JOURNAL OF WAVELETS MULTIRESOLUTION AND INFORMATION PROCESSING, 2017, 15 (06)