Incremental evolution of trainable neural networks that are backwards compatible

Cited by: 0
Authors
Christenson, C [1 ]
Kaikhah, K
Affiliations
[1] SW Texas Jr Coll, Dept Comp Sci, Uvalde, TX 78802 USA
[2] Texas State Univ, Dept Comp Sci, San Marcos, TX 78666 USA
Source
PROCEEDINGS OF THE IASTED INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND APPLICATIONS | 2006
Keywords
incremental evolution; neural networks; training; backwards compatible;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Supervised learning has long been used to modify artificial neural networks in order to perform classification tasks. However, the standard fully connected layered design is often inadequate for such tasks. We demonstrate that evolution can be used to design an artificial neural network that learns faster and more accurately. Evolution alone can create a network that performs a single task, but real-world environments are dynamic and therefore demand the ability to adapt to change. By evolving artificial neural networks within a dynamic environment, the networks are forced to rely on learning. Combined with incremental evolution, this strategy produces an artificial neural network that outperforms the standard fully connected layered design and can learn to perform an entire domain of tasks, including those of reduced complexity.
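The abstract does not spell out the authors' algorithm, but the combination it describes (evaluating genomes by how well they *learn*, and presenting tasks in order of increasing difficulty) can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's method: genomes are fixed-topology weight matrices, the "dynamic environment" is reduced to an incremental OR-then-XOR task sequence, and all names (`make_net`, `evolve`, etc.) are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(n_in, n_hidden, n_out):
    # A genome is a pair of weight matrices for a one-hidden-layer tanh net.
    return [rng.normal(0.0, 0.5, (n_in, n_hidden)),
            rng.normal(0.0, 0.5, (n_hidden, n_out))]

def clone(net):
    return [w.copy() for w in net]

def forward(net, x):
    return np.tanh(np.tanh(x @ net[0]) @ net[1])

def train(net, xs, ys, epochs=20, lr=0.1):
    # Brief supervised (gradient) training; modifies `net` in place.
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h = np.tanh(x @ net[0])
            o = np.tanh(h @ net[1])
            d_o = (o - y) * (1.0 - o ** 2)           # output-layer delta
            d_h = (d_o @ net[1].T) * (1.0 - h ** 2)  # hidden-layer delta
            net[1] -= lr * np.outer(h, d_o)
            net[0] -= lr * np.outer(x, d_h)
    return net

def fitness(net, xs, ys):
    # Negative mean squared error: higher is better.
    preds = np.array([forward(net, x) for x in xs])
    return -float(np.mean((preds - ys) ** 2))

def evolve(pop, xs, ys, gens=15):
    # Baldwin-style evaluation: score each genome by its error *after* a
    # short learning phase, so selection favors trainability over fixed weights.
    history = []
    for _ in range(gens):
        pop.sort(key=lambda n: fitness(train(clone(n), xs, ys), xs, ys),
                 reverse=True)
        history.append(fitness(train(clone(pop[0]), xs, ys), xs, ys))
        elite = [clone(n) for n in pop[:3]]
        pop = elite + [[w + rng.normal(0.0, 0.05, w.shape) for w in elite[i % 3]]
                       for i in range(len(pop) - 3)]
    return pop, history

# Incremental task sequence over the same inputs: OR (easy), then XOR (harder).
xs = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
or_targets = np.array([[0.], [1.], [1.], [1.]])
xor_targets = np.array([[0.], [1.], [1.], [0.]])

pop = [make_net(2, 4, 1) for _ in range(10)]
pop, hist_or = evolve(pop, xs, or_targets)
pop, hist_xor = evolve(pop, xs, xor_targets)  # continue from the OR-evolved population
```

Because the elite genomes are carried over unchanged and their post-training fitness is deterministic, the best score within each task phase never decreases from one generation to the next, which is what the `history` lists record.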
Pages: 222 / +
Page count: 2
Related papers (7 items)
[1] Anil K., 2004, WSEAS Transactions on Systems, Vol. 3, p. 950.
[2] Baldwin J.M., 1896, American Naturalist, Vol. 30, p. 441. DOI: 10.1086/276408.
[3] Boers E.J.W., 1995, Artificial Neural Nets and Genetic Algorithms, p. 333.
[4] Caruana R., 2001, Advances in Neural Information Processing Systems, Vol. 13, p. 402.
[5] Stanley K.O., Miikkulainen R., 2002, "Evolving neural networks through augmenting topologies," Evolutionary Computation, 10(2):99-127.
[6] Sterelny K., 2004, Evolution and Learning: The Baldwin Effect Reconsidered, p. 341.
[7] Turney P., 1996, Proceedings of the Workshop on Evolutionary Computing and Machine Learning, p. 135.