Multi-Level Semantic Feature Augmentation for One-Shot Learning

Cited: 207
Authors
Chen, Zitian [1 ]
Fu, Yanwei [1 ]
Zhang, Yinda [2 ]
Jiang, Yu-Gang [3 ,4 ]
Xue, Xiangyang [1 ]
Sigal, Leonid [5 ]
Affiliations
[1] Fudan Univ, Sch Data Sci, Shanghai 200433, Peoples R China
[2] Google LLC, Menlo Pk, CA 94043 USA
[3] Fudan Univ, Sch Comp Sci, Shanghai 200433, Peoples R China
[4] Jilian Technol Grp Video, Shanghai 200433, Peoples R China
[5] Univ British Columbia, Dept Comp Sci, Vancouver, BC, Canada
Keywords
One-shot learning; feature augmentation; object; classification
DOI
10.1109/TIP.2019.2910052
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The ability to quickly recognize and learn new visual concepts from limited samples enables humans to adapt rapidly to new tasks and environments. This ability rests on the semantic association of novel concepts with those already learned and stored in memory. Computers can begin to acquire a similar ability by exploiting a semantic concept space: a high-dimensional space in which similar abstract concepts lie close together and dissimilar ones far apart. In this paper, we propose a novel approach to one-shot learning that builds on this core idea. Our approach learns to map a novel sample instance to a concept, relates that concept to existing ones in the concept space, and, using these relationships, generates new instances by interpolating among the concepts to aid learning. Instead of synthesizing new image instances, we directly synthesize instance features by leveraging semantics through a novel auto-encoder network called dual TriNet. The encoder part of the TriNet learns to map multi-layer visual features from a CNN to a semantic vector. In the semantic space, we search for related concepts, which the decoder portion of the TriNet then projects back into the image feature space. Two strategies in the semantic space are explored. Notably, this seemingly simple strategy yields complex augmented feature distributions in the image feature space, leading to substantially better performance.
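The abstract describes an encode–perturb–decode pipeline: visual features are mapped into a semantic space, nearby semantic points are found, and those points are decoded back into feature space as extra training samples. The following is a minimal sketch of that pipeline, not the authors' dual TriNet: the encoder and decoder are random linear stand-ins, the feature and semantic dimensions (512 and 300) are hypothetical, and Gaussian perturbation is used as a crude proxy for searching related concepts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 512-d CNN feature and a 300-d semantic
# (e.g. word-embedding) concept space. All weights are random stand-ins
# for the trained encoder/decoder of the paper's dual TriNet.
FEAT_DIM, SEM_DIM = 512, 300
W_enc = rng.standard_normal((SEM_DIM, FEAT_DIM)) * 0.05  # feature -> semantic
W_dec = rng.standard_normal((FEAT_DIM, SEM_DIM)) * 0.05  # semantic -> feature

def encode(feat):
    """Map a visual feature vector into the semantic concept space."""
    return W_enc @ feat

def decode(sem):
    """Project a semantic vector back into the visual feature space."""
    return W_dec @ sem

def augment(feat, n_aug=4, noise_scale=0.1):
    """Generate extra training features for a one-shot sample by
    perturbing its semantic embedding toward nearby concepts (here
    approximated with Gaussian noise) and decoding each point back."""
    sem = encode(feat)
    noisy = sem + noise_scale * rng.standard_normal((n_aug, SEM_DIM))
    return np.stack([decode(s) for s in noisy])

one_shot_feature = rng.standard_normal(FEAT_DIM)
augmented = augment(one_shot_feature)
print(augmented.shape)  # (4, 512)
```

In the paper's setting the decoded vectors would be appended to the support set of the novel class before training the classifier; the point of decoding from semantic space is that even simple perturbations there induce the complex feature-space distributions the abstract mentions.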
Pages: 4594-4605
Page count: 12