Online Active Learning in Data Stream Regression Using Uncertainty Sampling Based on Evolving Generalized Fuzzy Models

Cited by: 86
Authors
Lughofer, Edwin [1 ]
Pratama, Mahardhika [2 ]
Affiliations
[1] Johannes Kepler Univ Linz, Dept Knowledge Based Math Syst, A-4040 Linz, Austria
[2] La Trobe Univ, Sch Engn & Math Sci, Bundoora, Vic 3086, Australia
Keywords
Active learning latency buffer (ALLB); data stream regression; evolving generalized Takagi-Sugeno (TS) fuzzy systems; extrapolation degree; nonlinearity degree; online active learning; single-pass uncertainty-based sampling; uncertainty in model outputs and parameters; VISUAL INSPECTION; SYSTEMS; IDENTIFICATION; NETWORK; DESIGN;
DOI
10.1109/TFUZZ.2017.2654504
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
In this paper, we propose three criteria for efficient sample selection in data stream regression problems within an online active learning context. Selection becomes important whenever the target values, which guide the update of the regressors as well as of the implicit model structures, are costly or time-consuming to measure, and also when very fast model updates are required to cope with the real-time demands of stream mining. Reducing the number of selected samples as much as possible while keeping the predictive accuracy of the models at a high level is, thus, a central challenge, which should ideally be met in an unsupervised, single-pass manner. Our selection criteria rely on three aspects: 1) the extrapolation degree combined with the model's nonlinearity degree, which is measured in terms of a new homogeneity criterion among adjacent local approximators; 2) the uncertainty in model outputs, which can be measured in terms of confidence intervals using so-called adaptive local error bars; here, we integrate a weighted localization of an incremental noise-level estimator and propose formulas for the online merging of local error bars; and 3) the uncertainty in model parameters, which is estimated by the so-called A-optimality criterion, relying on the Fisher information matrix. The selection criteria are developed in combination with evolving generalized Takagi-Sugeno (TS) fuzzy models (containing rules in arbitrarily rotated position), as previous publications have shown that these outperform conventional evolving TS models (containing axis-parallel rules). The results on three high-dimensional real-world streaming problems show that a model update based on only 10%-20% of selected samples can still achieve accumulated model errors over time similar to those of a full model update on all samples, with negligible sensitivity to the size of the active learning latency buffer.
Random sampling with the same percentage of selected samples, however, yielded much higher error rates. Hence, the intelligence in our sample selection concept leads to an economical balance between model accuracy and the measurement as well as computational costs of model updates.
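The third criterion in the abstract, parameter uncertainty via A-optimality on the Fisher information matrix, can be illustrated with a minimal sketch. This is not the paper's method (which operates on the consequent parameters of evolving generalized TS fuzzy rules); it is a hypothetical plain linear-regression analogue, where the Fisher information is proportional to `X^T X` and a candidate sample `x` is scored by the parameter-uncertainty term `x^T F^{-1} x`, so that extrapolating samples (the first aspect above) also score high:

```python
import numpy as np

def a_optimality_scores(X_labeled, X_stream, ridge=1e-6):
    """Score stream samples by parameter uncertainty (A-optimality flavour).

    For a linear regressor, the Fisher information is F = X^T X up to the
    noise variance; x^T F^{-1} x measures how uncertain the parameters are
    in the direction of x.  High-scoring samples are the ones whose target
    values would be most informative, so they are selected for labeling.
    """
    d = X_labeled.shape[1]
    F = X_labeled.T @ X_labeled + ridge * np.eye(d)  # ridge keeps F invertible
    F_inv = np.linalg.inv(F)
    # row-wise quadratic form x_i^T F^{-1} x_i for every stream sample
    return np.einsum('ij,jk,ik->i', X_stream, F_inv, X_stream)

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(50, 3))           # samples already labeled
X_new = np.vstack([X_lab.mean(axis=0),     # inside the seen region
                   10.0 * np.ones(3)])     # far outside it (extrapolation)
scores = a_optimality_scores(X_lab, X_new)
assert scores[1] > scores[0]  # the extrapolating sample is more informative
```

In a single-pass setting one would keep `F` (or directly `F_inv`, via the Sherman-Morrison update) incrementally rather than recomputing it, and select a sample whenever its score exceeds an adaptive threshold tuned to the desired selection budget.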
Pages: 292-309
Page count: 18