Data splitting for artificial neural networks using SOM-based stratified sampling

Cited by: 168
Authors
May, R. J. [1 ]
Maier, H. R. [2 ]
Dandy, G. C. [2 ]
Affiliations
[1] United Water, Res & Dev, Adelaide, SA 5001, Australia
[2] Univ Adelaide, Sch Civil Environm & Mining Engn, Adelaide, SA 5005, Australia
Keywords
Artificial neural networks; Data splitting; Cross-validation; Self-organizing maps; Stratified sampling; VALIDATION; PREDICTION; SELECTION; VARIANCE; MODELS; BIAS;
DOI
10.1016/j.neunet.2009.11.009
Chinese Library Classification
TP18 [Theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Data splitting is an important consideration during artificial neural network (ANN) development, where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between datasets. Of these approaches, DUPLEX is found to provide benchmark performance, yielding good model performance with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. (C) 2009 Elsevier Ltd. All rights reserved.
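The abstract's core idea can be sketched in code: train a SOM on the data, treat each SOM unit as a stratum, and allocate the hold-out sample across strata by Neyman allocation (stratum size times within-stratum standard deviation). The sketch below is a minimal illustration of that scheme, not the paper's implementation; the SOM here is a toy 1-D map in plain NumPy, and all function names, sizes and learning-rate schedules are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=8, n_iters=500, lr0=0.5, sigma0=2.0):
    """Toy 1-D SOM: units on a line, Gaussian neighbourhood, decaying rates."""
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    pos = np.arange(n_units)
    for t in range(n_iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(w - x, axis=1))   # best-matching unit
        decay = 1 - t / n_iters
        h = np.exp(-((pos - bmu) ** 2) / (2 * max(sigma0 * decay, 0.5) ** 2))
        w += (lr0 * decay) * h[:, None] * (x - w)        # pull neighbourhood
    return w

def neyman_allocation(data, weights, n_total):
    """Assign points to nearest SOM unit; allocate n_total samples
    proportionally to stratum_size * stratum_std (Neyman allocation)."""
    labels = np.argmin(
        np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2), axis=1)
    sizes = np.array([(labels == k).sum() for k in range(len(weights))])
    stds = np.array([data[labels == k].std() if (labels == k).sum() > 1 else 0.0
                     for k in range(len(weights))])
    scores = sizes * stds
    if scores.sum() == 0:                 # degenerate case: fall back to
        scores = sizes.astype(float)      # proportional allocation
    alloc = np.floor(n_total * scores / scores.sum()).astype(int)
    return labels, np.minimum(alloc, sizes)  # never over-draw a stratum

# Demo: split 60 points out of a deliberately non-uniform 1-D dataset.
data = np.concatenate([rng.normal(0, 0.1, 200), rng.normal(5, 1.0, 100)])[:, None]
w = train_som(data)
labels, alloc = neyman_allocation(data, w, n_total=60)
sample_idx = np.concatenate([
    rng.choice(np.where(labels == k)[0], n_k, replace=False)
    for k, n_k in enumerate(alloc) if n_k > 0])
```

Because the allocation weights each stratum by its spread as well as its size, the high-variance cluster receives more than its proportional share of the hold-out sample, which is the intuition behind using Neyman allocation over the SOM strata for non-uniform data.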
Pages: 283-294
Page count: 12
Related papers
37 records in total
  • [1] Generalisation for neural networks through data sampling and training procedures, with applications to streamflow predictions
    Anctil, F
    Lauzon, N
    [J]. HYDROLOGY AND EARTH SYSTEM SCIENCES, 2004, 8 (05) : 940 - 958
  • [2] [Anonymous], J COMPUT CIVIL ENG, DOI DOI 10.1061/(ASCE)0887-3801(2004)18:2(105)
  • [3] [Anonymous], 2001, SPRINGER SERIES INFO, DOI DOI 10.1007/978-3-642-56927-2
  • [4] Baxter CW, 2000, 6 ENV ENG SOC SPEC C, P376
  • [5] Forecasting chlorine residuals in a water distribution system using a general regression neural network
    Bowden, Gavin J.
    Nixon, John B.
    Dandy, Graeme C.
    Maier, Holger R.
    Holmes, Mike
    [J]. MATHEMATICAL AND COMPUTER MODELLING, 2006, 44 (5-6) : 469 - 484
  • [6] Input determination for neural network models in water resources applications. Part 1 - background and methodology
    Bowden, GJ
    Dandy, GC
    Maier, HR
    [J]. JOURNAL OF HYDROLOGY, 2005, 301 (1-4) : 75 - 92
  • [7] Optimal division of data for neural network models in water resources applications
    Bowden, GJ
    Maier, HR
    Dandy, GC
    [J]. WATER RESOURCES RESEARCH, 2002, 38 (02) : 2 - 1
  • [8] Cochran W.G., 2007, Sampling techniques
  • [9] Representative subset selection
    Daszykowski, M
    Walczak, B
    Massart, DL
    [J]. ANALYTICA CHIMICA ACTA, 2002, 468 (01) : 91 - 103
  • [10] Statistical tools to assess the reliability of self-organizing maps
    de Bodt, E
    Cottrell, M
    Verleysen, M
    [J]. NEURAL NETWORKS, 2002, 15 (8-9) : 967 - 978