MPPCANet: A feedforward learning strategy for few-shot image classification

Cited by: 13
Authors
Song, Yu [1 ,2 ,3 ,4 ,5 ]
Chen, Changsheng [1 ,2 ,3 ,4 ,5 ]
Affiliations
[1] Shenzhen Univ, Coll Elect & Informat Engn, Shenzhen, Peoples R China
[2] Shenzhen Univ, Shenzhen Key Lab Media Secur, Shenzhen, Peoples R China
[3] Guangdong Key Lab Intelligent Informat Proc, Shenzhen, Peoples R China
[4] Guangdong Lab Artificial Intelligence & Digital E, Shenzhen, Peoples R China
[5] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Feedforward learning; PCANet; Mixtures of probabilistic principal component analysis;
DOI
10.1016/j.patcog.2020.107792
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The main learning strategy of PCANet is to learn the convolutional filters from the data using Principal Component Analysis (PCA). This implicitly assumes that all image patches are sampled from a single Gaussian component, which is too strong an assumption. In this paper, the image patches are modeled using mixtures of probabilistic principal component analysis (MPPCA), and the corresponding MPPCANet (a PCANet constructed using mixtures of probabilistic principal component analysis) is proposed. The proposed model is applied to the few-shot learning scenario. In the proposed framework, the image patches are assumed to come from a superposition of several Gaussian components. In estimating the parameters of the MPPCA model, the clustering of the training image patches and the principal components of each cluster are obtained simultaneously. The number of mixture components is determined automatically during the optimization procedure. The theoretical insights of the proposed MPPCANet are elaborated by comparison with our prior work, CPCANet (PCANet with clustering-based filters). The proposed MPPCANet is evaluated on several benchmark visual data sets and compared with the original PCANet, CPCANet, and several state-of-the-art methods. The experimental results show that the proposed MPPCANet significantly improves the recognition capability of the original PCANet under the few-shot learning scenario. The performance of the MPPCANet is also better than that of the CPCANet in most cases. (c) 2020 Elsevier Ltd. All rights reserved.
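The abstract describes learning convolutional filters as the principal components of mean-removed image patches, which MPPCANet then generalizes from a single Gaussian to a mixture. A minimal sketch of that single-Gaussian first stage of PCANet (not the authors' MPPCA extension; the function name, patch size, and filter count are illustrative assumptions):

```python
import numpy as np

def learn_pca_filters(images, patch_size=5, num_filters=8):
    """Learn convolutional filters as the top principal components
    of mean-removed image patches (PCANet-style first stage)."""
    k = patch_size
    patches = []
    for img in images:
        H, W = img.shape
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())   # per-patch mean removal
    X = np.stack(patches)                      # (num_patches, k*k)
    # Eigenvectors of the patch covariance matrix give the filters.
    cov = X.T @ X / X.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)     # ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:num_filters]]
    return top.T.reshape(num_filters, k, k)

rng = np.random.default_rng(0)
imgs = rng.standard_normal((4, 16, 16))
filters = learn_pca_filters(imgs)
print(filters.shape)  # (8, 5, 5)
```

The single covariance matrix here is exactly the "single Gaussian component" assumption the paper criticizes; MPPCA instead fits several probabilistic PCA components and extracts principal directions per component.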
Pages: 14