Bi-level multi-objective evolution of a Multi-Layered Echo-State Network Autoencoder for data representations

Cited by: 19
Authors
Chouikhi, Naima [1 ]
Ammar, Boudour [1 ]
Hussain, Amir [2 ]
Alimi, Adel M. [1 ]
Affiliations
[1] Univ Sfax, Natl Engn Sch Sfax (ENIS), Res Grp Intelligent Machines (REGIM Lab), BP 1173, Sfax 3038, Tunisia
[2] Edinburgh Napier Univ, Sch Comp, Merchiston Campus, Edinburgh EH10 5DT, Midlothian, Scotland
Keywords
Multi-Layered Echo State Network; Autoencoder; Data representation; PSO; Multi-objective optimization; Architecture optimization; Weights optimization; EXTREME LEARNING-MACHINE; NEURAL-NETWORK; ALGORITHM; CLASSIFICATION; OPTIMIZATION; PARAMETERS
DOI
10.1016/j.neucom.2019.03.012
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The Multi-Layered Echo-State Network (ML-ESN) is a recently developed, highly powerful type of recurrent neural network that has succeeded on several non-linear benchmark problems. On account of its rich dynamics, the ML-ESN is exploited in this paper, for the first time, as a recurrent autoencoder (ML-ESNAE) to extract new features from original data representations. Further, the challenging and crucial task of optimally determining the ML-ESNAE architecture and training parameters is addressed, in order to extract more efficient features from the data. Traditionally, in an ML-ESN, the parameters (number of hidden neurons, sparsity rates, weights) are randomly chosen and manually altered to achieve a minimum learning error. On one hand, this random setting may not guarantee the best generalization results; on the other, it can increase the network's complexity. In this paper, a novel bi-level evolutionary optimization approach is therefore proposed for the ML-ESNAE to deal with these challenges. The first level performs Pareto multi-objective architecture optimization, maximizing learning accuracy while keeping network complexity low. Next, every Pareto-optimal solution obtained from the first level undergoes mono-objective weight optimization at the second level. Particle Swarm Optimization (PSO) is used as the evolutionary tool at both levels. An empirical study shows that the evolved ML-ESNAE produces a noticeable improvement in extracting new, more expressive features from the original data. A number of application case studies, using a range of benchmark datasets, show that the extracted features yield excellent classification accuracy. The effectiveness of the evolved ML-ESNAE is demonstrated for both noisy and noise-free data. In conclusion, the evolutionary ML-ESNAE is proposed as a new benchmark for the evolutionary AI and machine learning research community. (C) 2019 Elsevier B.V. All rights reserved.
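The abstract leaves the ESN internals implicit. As a rough, hypothetical sketch (not the authors' implementation), a single-reservoir echo-state autoencoder can be built in a few lines of numpy: the recurrent weights are fixed, sparsified, and scaled to a chosen spectral radius, and only a ridge-regression readout that reconstructs the input is trained, with the reservoir states serving as the extracted features. All class names and parameter values below are illustrative assumptions.

```python
import numpy as np

class ESNAutoencoder:
    """Minimal single-reservoir echo-state autoencoder sketch.

    The reservoir is a fixed random recurrent layer; only the linear
    readout (trained here to reconstruct the input) is learned, via
    ridge regression. Reservoir states act as the extracted features.
    """

    def __init__(self, n_in, n_res, spectral_radius=0.9, sparsity=0.1,
                 ridge=1e-6, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        W *= rng.random((n_res, n_res)) < sparsity            # sparse reservoir
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state scaling
        self.W, self.ridge = W, ridge

    def features(self, U):
        """Run the input sequence U (T x n_in) through the reservoir."""
        X = np.zeros((len(U), self.W.shape[0]))
        x = np.zeros(self.W.shape[0])
        for t, u in enumerate(U):
            x = np.tanh(self.W_in @ u + self.W @ x)
            X[t] = x
        return X

    def fit(self, U):
        """Train the readout W_out to reconstruct U from reservoir states."""
        X = self.features(U)
        A = X.T @ X + self.ridge * np.eye(X.shape[1])
        self.W_out = np.linalg.solve(A, X.T @ U)
        return self

    def reconstruct(self, U):
        return self.features(U) @ self.W_out
```

In the paper's bi-level scheme, quantities fixed by hand above (reservoir size, sparsity rate, weights) would instead be searched: architecture via Pareto multi-objective PSO at level 1, then weights via mono-objective PSO at level 2. Plain ridge training here merely stands in for that.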
Pages: 195-211
Page count: 17