Non-parametric data selection for neural learning in non-stationary time series

Cited by: 13
Authors
Deco, G
Neuneier, R
Schürmann, B
Affiliation
[1] Siemens AG, R&D, 81739 Munich
DOI
10.1016/S0893-6080(96)00108-6
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405
Abstract
When a parametric model of a non-stationary time series is constructed from data, as is the case for neural networks, it is very important to train the model on a data set that contains the underlying structure to be discovered; regions where only noisy behavior is observed should be ignored. Information about predictability can be used, for example, to identify regions where a temporal structure is visible and thus to select training data for a neural network used for prediction. In this paper, we present a non-parametric, cumulant-based statistical approach for detecting linear and nonlinear statistical dependences in non-stationary time series. Statistical dependence is detected by measuring the predictability, which tests the null hypothesis of statistical independence, expressed in Fourier space, by the surrogate method. The predictability is thus defined as a higher-order, cumulant-based significance that discriminates between the original data and a set of scrambled surrogate data corresponding to the null hypothesis of a non-causal relationship between past and present. In this formulation, nonlinear and non-Gaussian temporal dependences can be detected in time series. We apply the data selection method presented here to the task of predicting the daily relative differences of the DAX given 12 inputs. These input variables describe a so-called technical model using only historical DAX prices and the total volume of transactions on the stock market. (C) 1997 Elsevier Science Ltd.
Pages: 401-407
Page count: 7
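
As a rough illustration of the surrogate test described in the abstract, the minimal Python sketch below measures predictability as the significance of a higher-order cumulant of the original series against an ensemble of scrambled surrogates. The specific statistic (a third-order lagged autocumulant), the shuffle-based surrogate generation, and all function names and parameters are illustrative assumptions, not the paper's exact Fourier-space formulation.

import numpy as np

def lagged_cumulant(x, lag=1):
    # Third-order lagged autocumulant E[x_t * x_{t-lag}^2] of the centered
    # series; unlike the autocovariance, it is also sensitive to non-Gaussian,
    # nonlinear temporal structure.
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.mean(x[lag:] * x[:-lag] ** 2))

def predictability(x, lag=1, n_surrogates=200, seed=0):
    # Z-score of the original statistic against scrambled surrogates.
    # Shuffling destroys any causal relation between past and present, so the
    # surrogate ensemble realizes the null hypothesis of temporal independence.
    rng = np.random.default_rng(seed)
    original = lagged_cumulant(x, lag)
    surrogates = np.array([lagged_cumulant(rng.permutation(x), lag)
                           for _ in range(n_surrogates)])
    return abs(original - surrogates.mean()) / surrogates.std()

# Demo: white noise (no structure) vs. the Henon map (nonlinear structure).
rng = np.random.default_rng(1)
noise = rng.standard_normal(2000)
henon = np.zeros(2000)
for t in range(2, 2000):
    henon[t] = 1.0 - 1.4 * henon[t - 1] ** 2 + 0.3 * henon[t - 2]
print(f"white noise: z = {predictability(noise):.1f}")  # small, consistent with the null
print(f"Henon map:   z = {predictability(henon):.1f}")  # large: temporal structure detected

In a data selection setting, such a significance would be computed over sliding windows of the series, and only windows with a high z-score would be kept for training the prediction network.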