Label propagation via local geometry preserving for deep semi-supervised image recognition

Cited by: 5
Authors
Qing, Yuanyuan [1 ]
Zeng, Yijie [1 ]
Huang, Guang-Bin [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Keywords
Semisupervised learning; Label propagation; Image classification
DOI
10.1016/j.neunet.2021.06.007
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we propose a novel transductive pseudo-labeling method for deep semi-supervised image recognition. Inspired by the superiority of pseudo labels inferred by label propagation over those inferred by the network, we argue that the information flow from labeled to unlabeled data should be kept noiseless and suffer minimal loss. Previous works use the scarce labeled data alone for feature learning and consider only pairwise relationships between feature vectors when constructing the similarity graph in feature space, which causes two problems that ultimately lead to noisy and incomplete information flow from labeled to unlabeled data. First, the learned feature mapping is highly likely to be biased and can easily over-fit noise. Second, local geometry information in feature space is lost during label propagation. Accordingly, we first propose to incorporate self-supervised learning into feature learning so that the information flow in feature space is cleaner during subsequent label propagation. Second, we propose a reconstruction-based measure of pairwise similarity in feature space, such that local geometry information is preserved. An ablation study confirms the synergistic effect of features learned with self-supervision and a local-geometry-preserving similarity graph. Extensive experiments on benchmark datasets verify the effectiveness of the proposed method. (C) 2021 Elsevier Ltd. All rights reserved.
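The abstract's two graph-side ideas can be sketched in a minimal, self-contained form: build edge weights from locally linear reconstruction coefficients (so each sample is expressed by its neighbors, preserving local geometry), then run closed-form label propagation over that graph. This is an illustrative sketch only, not the authors' implementation: the unconstrained least-squares solver, the negative-weight clipping, and all function names here are assumptions, and the propagation step follows the standard Zhou-et-al.-style formulation rather than the paper's exact variant.

```python
import numpy as np

def reconstruction_weights(X, k=3, reg=1e-3):
    """Build a similarity graph whose edge weights are the coefficients that
    best reconstruct each sample as an affine combination of its k nearest
    neighbours (an LLE-style simplification, assumed here for illustration)."""
    n = X.shape[0]
    W = np.zeros((n, n))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]        # k nearest, skipping the point itself
        Z = X[nbrs] - X[i]                       # centre neighbours on x_i
        G = Z @ Z.T                              # local Gram matrix
        G = G + reg * np.trace(G) * np.eye(k)    # regularise for numerical stability
        w = np.linalg.solve(G, np.ones(k))       # solve for affine weights
        w = w / w.sum()                          # enforce sum-to-one constraint
        W[i, nbrs] = np.maximum(w, 0.0)          # drop negative coefficients (simplification)
    return W

def label_propagation(W, y, n_classes, alpha=0.99):
    """Closed-form label propagation: F = (I - alpha*S)^-1 Y, with unlabeled
    samples marked by y == -1 and S the symmetrically normalised graph."""
    W = np.maximum(W, W.T)                       # symmetrise the directed kNN graph
    d = W.sum(1)
    d[d == 0] = 1.0                              # guard isolated nodes
    S = W / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]
    Y = np.zeros((len(y), n_classes))
    for i, c in enumerate(y):
        if c >= 0:
            Y[i, c] = 1.0                        # one-hot rows for labeled samples
    F = np.linalg.solve(np.eye(len(y)) - alpha * S, Y)
    return F.argmax(1)                           # pseudo-label = strongest class score
```

Given two well-separated clusters with one labeled sample each, the reconstruction graph stays within clusters and propagation assigns every unlabeled sample its cluster's label; in the full method these pseudo-labels would then supervise the network alongside the self-supervised objective.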
Pages: 303-313
Page count: 11