Learning multi-task local metrics for image annotation

Cited by: 0
Authors
Xing Xu
Atsushi Shimada
Hajime Nagahara
Rin-ichiro Taniguchi
Affiliations
[1] Kyushu University,Department of Advanced Information and Technology
Source
Multimedia Tools and Applications | 2016, Vol. 75
Keywords
Image annotation; Label prediction; Metric learning; Local metric; Multi-task learning;
DOI
Not available
Abstract
The goal of image annotation is to automatically assign a set of textual labels to an image to describe its visual contents. Recently, with the rapid increase in the number of web images, nearest neighbor (NN) based methods have become more attractive and have shown exciting results for image annotation. One of the key challenges of these methods is to define an appropriate similarity measure between images for neighbor selection. Several distance metric learning (DML) algorithms derived from traditional image classification problems have been applied to annotation tasks. However, a fundamental limitation of applying DML to image annotation is that it learns a single global distance metric over the entire image collection and measures the distance between image pairs at the image level. For multi-label annotation problems, it may be more reasonable to measure the similarity of image pairs at the label level. In this paper, we develop a novel label prediction scheme utilizing multiple label-specific local metrics for label-level similarity measurement, and propose two different local metric learning methods in a multi-task learning (MTL) framework. Extensive experimental results on two challenging annotation datasets demonstrate that 1) utilizing multiple local distance metrics to learn label-level distances is superior to using a single global metric in label prediction, and 2) the proposed methods, which use the MTL framework to learn multiple local metrics simultaneously, can model the commonalities of labels, thereby improving label prediction and achieving state-of-the-art annotation performance.
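The label-level prediction idea described in the abstract can be illustrated with a small sketch. This is not the paper's algorithm: the function names, the k-nearest-neighbor scoring rule, and the fixed (unlearned) per-label Mahalanobis matrices below are all illustrative assumptions, intended only to show how label-specific local metrics differ from a single global metric.

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y) under metric matrix M."""
    d = x - y
    return float(d @ M @ d)

def predict_labels(query, train_X, train_Y, metrics, k=3):
    """Rank labels for a query image by label-level distance.

    Illustrative scheme (an assumption, not the paper's method): for each
    label l, measure the query's distance to training images carrying l
    using that label's own metric matrix metrics[l], and score the label
    by the mean of its k smallest such distances (lower = more likely).
    """
    n_labels = train_Y.shape[1]
    scores = {}
    for l in range(n_labels):
        carriers = np.where(train_Y[:, l] == 1)[0]  # images tagged with label l
        if len(carriers) == 0:
            continue
        dists = sorted(
            mahalanobis_dist(query, train_X[i], metrics[l]) for i in carriers
        )[:k]
        scores[l] = float(np.mean(dists))
    return sorted(scores, key=scores.get)  # labels, most plausible first
```

In the paper's setting the matrices `metrics[l]` would be learned jointly in an MTL framework so that related labels share structure; here identity matrices can be passed in just to exercise the prediction scheme.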
Pages: 2203–2231 (28 pages)