State Preserving Extreme Learning Machine for Face Recognition

Times Cited: 0
Authors
Alom, Md. Zahangir [1 ]
Sidike, Paheding [1 ]
Asari, Vijayan K. [1 ]
Taha, Tarek M. [1 ]
Affiliations
[1] Univ Dayton, Dept Elect & Comp Engn, Dayton, OH 45469 USA
Source
2015 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2015
Keywords
Extreme learning machine; weight adaptive; neural network; feature extraction; face recognition; CLASSIFICATION; REGRESSION; NETWORKS;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Extreme Learning Machine (ELM) has been introduced as a new algorithm for training single hidden layer feed-forward neural networks (SLFNs) in place of the classical gradient-based algorithms. Based on the consistency property of data, which enforces similar samples to share similar properties, ELM is a biologically inspired learning algorithm for SLFNs that learns much faster, generalizes well, and performs well in classification applications. However, the random generation of the weight matrix in current ELM-based techniques can lead to unstable outputs in the learning and testing phases. Therefore, we present a novel approach for computing the weight matrix in ELM, which forms a State Preserving Extreme Learning Machine (SPELM). SPELM stabilizes ELM training and testing outputs while monotonically increasing accuracy by preserving state variables. Furthermore, three popular feature extraction techniques, namely Gabor, Pyramid Histogram of Oriented Gradients (PHOG), and Local Binary Pattern (LBP), are incorporated with SPELM for performance evaluation. Experimental results show that our proposed algorithm yields the best performance on widely used face datasets such as Yale, CMU, and ORL compared to state-of-the-art ELM-based classifiers.
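The abstract builds on the standard ELM training procedure: hidden-layer input weights are generated at random (the step whose instability SPELM addresses), and the output weights are then solved in closed form with the Moore-Penrose pseudoinverse. The SPELM weight computation itself is not detailed in this record, so the following is only a minimal sketch of the baseline ELM, with hypothetical function names `elm_train` and `elm_predict`:

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=None):
    """Basic ELM training: random hidden weights, closed-form output weights.

    X: (n_samples, n_features) inputs; T: (n_samples, n_classes) targets
    (e.g. one-hot labels). Returns the random input weights/biases and the
    learned output weights.
    """
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    # Randomly generated input weights and biases -- the source of the
    # run-to-run output variability that the paper's SPELM aims to stabilize.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T      # least-squares solution via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass: hidden activations times learned output weights."""
    return np.tanh(X @ W + b) @ beta
```

Because the output weights are obtained by a single least-squares solve rather than iterative gradient descent, training is fast, but different random draws of `W` and `b` can yield noticeably different accuracies, which motivates the state-preserving weight selection proposed in the paper.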
Pages: 7
References
27 records in total
[1] Ahonen T., IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, 2006.
[2] Anonymous, Proceedings of the 6th ACM International Conference on Image and Video Retrieval, 2007. DOI: 10.1145/1282280.1282340
[3] Barros A. L. B. P., Lecture Notes in Computer Science, vol. 8073, p. 588, 2013. DOI: 10.1007/978-3-642-40846-5_59
[4] Bartlett P. L., "The sample complexity of pattern classification with neural networks: The size of the weights is more important than the size of the network," IEEE Transactions on Information Theory, 44(2):525-536, 1998.
[5] Deng W., Zheng Q., Chen L., "Regularized extreme learning machine," 2009 IEEE Symposium on Computational Intelligence and Data Mining, pp. 389-395, 2009.
[6] Feng G., Huang G.-B., Lin Q., Gay R., "Error minimized extreme learning machine with growth of hidden nodes and incremental learning," IEEE Transactions on Neural Networks, 20(8):1352-1357, 2009.
[7] Horata P., Chiewchanwattana S., Sunat K., "Robust extreme learning machine," Neurocomputing, 102:31-44, 2013.
[8] Huang G.-B., Zhu Q.-Y., Siew C.-K., "Extreme learning machine: Theory and applications," Neurocomputing, 70(1-3):489-501, 2006.
[9] Huang G.-B., Chen L., Siew C.-K., "Universal approximation using incremental constructive feedforward networks with random hidden nodes," IEEE Transactions on Neural Networks, 17(4):879-892, 2006.
[10] Huang G.-B., Zhou H., Ding X., Zhang R., "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 42(2):513-529, 2012.