Critical direction projection networks for few-shot learning

Cited by: 7
Authors
Bi, Sheng [1 ,2 ]
Wang, Yongxing [1 ]
Li, Xiaoxiao [1 ]
Dong, Min [1 ]
Zhu, Jinhui [3 ]
Affiliations
[1] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Peoples R China
[2] Shenzhen Acad Robot, Shenzhen 518000, Guangdong, Peoples R China
[3] South China Univ Technol, Sch Software Engn, Guangzhou 510006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep learning; Few-shot learning; Point cloud; 3D object classification;
DOI
10.1007/s10489-020-02110-7
CLC number
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the development of deep learning, visual systems outperform humans in many classification tasks. However, the scarcity of labelled data remains the most critical problem for such systems. Few-shot learning is adopted to tackle this problem: a classifier must learn to recognize classes that are absent from the training data when given only a few examples. In this paper, critical direction projection (CDP) networks are proposed for few-shot learning. CDP involves two crucial steps: the first is to find the critical direction of each category in the embedding space, and the second is to measure the similarity between a sample and each critical direction by the length of the sample's projection onto that direction. CDP networks are readily compatible with existing classification networks and achieve state-of-the-art performance on several benchmark datasets. Moreover, CDP performs strongly on both 2D image and 3D object classification, and this study is a new attempt at 3D object classification in a few-shot learning scenario. To summarize, our major research contributions are as follows: 1) a novel metric learning method, CDP, is proposed; 2) a new feature extraction module, EffNet, is introduced; and 3) a benchmark for few-shot 3D object classification is provided.
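The abstract does not specify how the critical directions are computed, so the following is only a minimal sketch of the projection-based scoring it describes, under the assumption that each class's critical direction is the unit vector of its mean support embedding; the function names critical_directions and projection_scores are illustrative, not the authors' API.

# Minimal sketch (not the authors' implementation): assumes the critical
# direction of a class is the unit vector of its mean support embedding,
# and that a query's score is its signed projection length onto that direction.
import torch
import torch.nn.functional as F

def critical_directions(support_emb: torch.Tensor) -> torch.Tensor:
    # support_emb: (num_classes, shots, dim) embeddings of the support set.
    class_means = support_emb.mean(dim=1)        # (num_classes, dim)
    return F.normalize(class_means, dim=-1)      # one unit direction per class

def projection_scores(query_emb: torch.Tensor, directions: torch.Tensor) -> torch.Tensor:
    # query_emb: (num_queries, dim); returns (num_queries, num_classes) scores,
    # each the projection length of a query embedding onto a class direction.
    return query_emb @ directions.t()

# Toy 5-way 1-shot episode with random embeddings standing in for a backbone.
support = torch.randn(5, 1, 64)
queries = torch.randn(8, 64)
scores = projection_scores(queries, critical_directions(support))
predictions = scores.argmax(dim=1)               # predicted class per query

Because the score is a dot product with a unit vector, this reduces to a cosine-style similarity scaled by the query's norm, which is one plausible reading of "similarity according to the projection length".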
Pages: 5400 - 5413
Number of pages: 14