Interpretable Few-Shot Learning with Contrastive Constraint

Cited by: 0
Authors
Zhang L. [1 ]
Chen Y. [1 ]
Wu W. [1 ]
Wei B. [1 ]
Luo X. [1 ]
Chang X. [2 ]
Liu J. [1 ]
Affiliations
[1] School of Computer Science and Technology, Xi'an Jiaotong University, Xi'an
[2] School of Computing Technologies, Royal Melbourne Institute of Technology University, Melbourne
Source
Jisuanji Yanjiu yu Fazhan/Computer Research and Development | 2021, Vol. 58, No. 12
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
Contrastive learning; Few-shot learning; Image recognition; Interpretable analysis; Local descriptor;
DOI
10.7544/issn1000-1239.2021.20210999
Abstract
Unlike deep learning with large-scale supervision, few-shot learning aims to learn the characteristics of a class from only a few labeled examples, which is closer to the visual cognitive mechanism of the human brain. In recent years, few-shot learning has attracted increasing attention from researchers. To discover semantic similarities between the query set (unlabeled images) and the support set (a few labeled images) in a feature embedding space, methods combining meta-learning and metric learning have emerged and achieved strong performance on few-shot image classification tasks. However, these methods lack interpretability: they cannot provide an explainable reasoning process analogous to human cognition. We therefore propose a novel interpretable few-shot learning method, INT-FSL, based on a positional attention mechanism, which aims to answer two key questions in few-shot classification: 1) which parts of an unlabeled image play an important role in the classification task, and 2) which class the features reflected by those key parts belong to. In addition, we design contrastive constraints at both the global and local levels within each few-shot meta-task, exploiting the internal structure of the data to alleviate the limited supervision. Extensive experiments on three image benchmark datasets show that INT-FSL not only improves few-shot classification performance effectively but also offers good interpretability in its reasoning process. © 2021, Science Press. All rights reserved.
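The metric-based episodic setup the abstract describes (class prototypes built from a support set, similarity scoring of a query, and an InfoNCE-style contrastive term over embeddings) can be sketched in plain Python. This is a generic illustration of the mechanism, not the authors' INT-FSL implementation; the function names, the cosine metric, and the temperature value are assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two non-zero feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def prototypes(support):
    # support: {class_label: list of feature vectors} -> per-class mean vector.
    protos = {}
    for label, feats in support.items():
        dim = len(feats[0])
        protos[label] = [sum(f[i] for f in feats) / len(feats) for i in range(dim)]
    return protos

def classify(query, protos, temperature=0.1):
    # Softmax over temperature-scaled cosine similarities to each prototype.
    scores = {c: cosine(query, p) / temperature for c, p in protos.items()}
    m = max(scores.values())  # subtract max for numerical stability
    exps = {c: math.exp(s - m) for c, s in scores.items()}
    z = sum(exps.values())
    return {c: e / z for c, e in exps.items()}

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    # InfoNCE-style term: pull the anchor toward the positive embedding
    # and push it away from the negatives.
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))
```

In a real system the feature vectors would come from a learned embedding network, and the contrastive term would be applied at both the global (image-level) and local (descriptor-level) scales, as the paper's constraint design suggests.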
Pages: 2573-2584
Number of pages: 11