Adversarial Knowledge Transfer from Unlabeled Data

Cited by: 0
Authors
Gupta, Akash [1 ]
Panda, Rameswar [2 ]
Paul, Sujoy [1 ]
Zhang, Jianming [3 ]
Roy-Chowdhury, Amit K. [1 ]
Affiliations
[1] Univ Calif Riverside, Riverside, CA 92521 USA
[2] MIT IBM Watson AI Lab, Cambridge, MA USA
[3] Adobe Res, San Jose, CA USA
Source
MM '20: Proceedings of the 28th ACM International Conference on Multimedia, 2020
Funding
National Science Foundation (USA)
Keywords
Adversarial Learning; Knowledge Transfer; Feature Alignment; Framework
DOI
10.1145/3394171.3413688
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
While machine learning approaches to visual recognition offer great promise, most existing methods rely heavily on the availability of large quantities of labeled training data. However, in the vast majority of real-world settings, manually collecting such large labeled datasets is infeasible due to the cost of labeling data or the paucity of data in a given domain. In this paper, we present a novel Adversarial Knowledge Transfer (AKT) framework for transferring knowledge from internet-scale unlabeled data to improve the performance of a classifier on a given visual recognition task. The proposed adversarial learning framework aligns the feature space of the unlabeled source data with that of the labeled target data, such that the target classifier can be used to predict pseudo labels on the source data. An important novel aspect of our method is that the unlabeled source data can belong to classes different from those of the labeled target data, and there is no need to define a separate pretext task, unlike some existing approaches. Extensive experiments demonstrate that models learned using our approach hold considerable promise across a variety of visual recognition tasks on multiple standard datasets. The project page is at https://agupt013.github.io/akt.html.
Pages: 2175-2183
Number of pages: 9
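
To make the mechanism described in the abstract concrete, below is a minimal PyTorch-style sketch of adversarial feature alignment with pseudo-labeling. It is not the authors' released implementation: the module names (FeatureExtractor, Classifier, Discriminator, training_step), the MLP shapes, and the unweighted sum of losses are illustrative assumptions.

    # Minimal sketch (not the authors' implementation) of adversarial feature
    # alignment with pseudo-labels, assuming simple MLP modules on flat inputs.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeatureExtractor(nn.Module):
        def __init__(self, in_dim=784, feat_dim=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, feat_dim), nn.ReLU())

        def forward(self, x):
            return self.net(x)

    class Classifier(nn.Module):
        def __init__(self, feat_dim=128, num_classes=10):
            super().__init__()
            self.fc = nn.Linear(feat_dim, num_classes)

        def forward(self, f):
            return self.fc(f)

    class Discriminator(nn.Module):
        # Scores whether a feature comes from the labeled target data (1)
        # or the unlabeled source data (0).
        def __init__(self, feat_dim=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 1))

        def forward(self, f):
            return self.net(f)

    def training_step(x_tgt, y_tgt, x_src, F_tgt, F_src, C, D, opt_main, opt_disc):
        # 1) Supervised loss on the small labeled target batch.
        f_tgt = F_tgt(x_tgt)
        cls_loss = F.cross_entropy(C(f_tgt), y_tgt)

        # 2) Train the discriminator to separate target features from source features.
        f_src = F_src(x_src)
        d_tgt, d_src = D(f_tgt.detach()), D(f_src.detach())
        d_loss = (F.binary_cross_entropy_with_logits(d_tgt, torch.ones_like(d_tgt))
                  + F.binary_cross_entropy_with_logits(d_src, torch.zeros_like(d_src)))
        opt_disc.zero_grad()
        d_loss.backward()
        opt_disc.step()

        # 3) Push source features toward the target feature space (fool D), then let
        #    the target classifier assign pseudo labels to the aligned source features.
        d_src_adv = D(f_src)
        adv_loss = F.binary_cross_entropy_with_logits(d_src_adv, torch.ones_like(d_src_adv))
        pseudo = C(f_src).argmax(dim=1)            # hard pseudo labels (no gradient)
        pseudo_loss = F.cross_entropy(C(f_src), pseudo)

        total = cls_loss + adv_loss + pseudo_loss  # unweighted sum; weights are a design choice
        opt_main.zero_grad()
        total.backward()
        opt_main.step()
        return total.item()

In use, opt_main would optimize the parameters of F_tgt, F_src, and C, while opt_disc optimizes D. In practice one would typically keep only high-confidence pseudo labels and weight the loss terms; those details here are assumptions rather than the published recipe.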