Parallel neural networks for improved nonlinear principal component analysis

Cited by: 23
Authors
Heo, Seongmin [1 ]
Lee, Jay H. [1 ]
Affiliation
[1] Korea Adv Inst Sci & Technol, Dept Chem & Biomol Engn, 291 Daehak Ro, Daejeon 34141, South Korea
Keywords
Nonlinear principal component analysis; Parallel neural network; Autoassociative neural network; Autoencoder; Neural network training;
DOI
10.1016/j.compchemeng.2019.05.011
CLC number (Chinese Library Classification)
TP39 [Computer Applications];
Subject classification codes
081203; 0835
Abstract
In this paper, a parallel neural network architecture is proposed to improve the performance of neural-network-based nonlinear principal component analysis. Two typical approaches exist for such analysis: simultaneous extraction of principal components using a single autoassociative neural network (also known as an autoencoder), and sequential extraction using multiple neural networks in series. The proposed architecture can be obtained by systematically pruning the network connections of a fully connected autoassociative neural network, resulting in decoupled neural networks. As a result, more independent (i.e., less correlated) principal components can be obtained than with the simultaneous extraction approach. The proposed architecture can also be viewed as a rearrangement, in a parallel setting, of the multiple neural networks used for sequential extraction, which makes network training more efficient. Simulation case studies are performed to illustrate the advantages of the proposed architecture, and the results show that it is particularly beneficial for deep neural networks. (C) 2019 Elsevier Ltd. All rights reserved.
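To make the described architecture concrete, below is a minimal PyTorch sketch (not taken from the paper): it assumes each decoupled branch is a one-node-bottleneck autoencoder and that the branch reconstructions are summed to rebuild the input. The class name ParallelNLPCA, the hidden-layer width, and the additive combination of branches are illustrative assumptions.

import torch
import torch.nn as nn

class ParallelNLPCA(nn.Module):
    """Sketch of a parallel NLPCA network: n_components decoupled
    branches, each squeezing the input through a one-node bottleneck
    (one nonlinear principal component per branch). The branch
    reconstructions are summed, an assumed combination rule."""
    def __init__(self, n_inputs: int, n_components: int, hidden: int = 10):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Linear(n_inputs, hidden), nn.Tanh(),
                nn.Linear(hidden, 1),                # one nonlinear PC
                nn.Linear(1, hidden), nn.Tanh(),
                nn.Linear(hidden, n_inputs),
            )
            for _ in range(n_components)
        ])

    def forward(self, x):
        # Each decoupled branch contributes an additive part of the
        # reconstruction; all branches can be trained simultaneously.
        return sum(branch(x) for branch in self.branches)

# Usage: reconstruct 5-dimensional data with 2 nonlinear PCs.
model = ParallelNLPCA(n_inputs=5, n_components=2)
x = torch.randn(32, 5)
loss = nn.functional.mse_loss(model(x), x)
loss.backward()

Because the branches share no weights, they can be trained in one pass (unlike sequential extraction, where each network waits for the residuals of the previous one), which is the training-efficiency point made in the abstract.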
Pages: 1-10 (10 pages)