Reliable Localized On-line Learning in Non-stationary Environments

Cited by: 0
Authors
Buschermoehle, Andreas [1 ]
Brockmann, Werner [1 ]
Affiliations
[1] Univ Osnabruck, Smart Embedded Syst Grp, Osnabruck, Germany
Source
2014 IEEE CONFERENCE ON EVOLVING AND ADAPTIVE INTELLIGENT SYSTEMS (EAIS) | 2014
Keywords
PERCEPTRON; DESCENT; MODEL;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
On-line learning allows a model to adapt to changing, non-stationary environments. Typically, however, a hypothesis of the data relation is adapted from a stream of single, local training examples, each of which changes the global input-output relation. With every such example the whole hypothesis is revised incrementally, which can harm the overall predictive quality of the learned model. Yet for reliable adaptation the learned model must yield good predictions at every step. Therefore, the IRMA approach to on-line learning incorporates each new example with a stringent local, but minimal global, influence on the input-output relation. The contribution of this paper is twofold. First, it presents an extension of IRMA regarding the setup of its stiffness hyper-parameter. Second, IRMA is investigated for the first time on a non-trivial real-world application in a non-stationary environment: it is compared with state-of-the-art algorithms on predicting future electric loads in a power grid, where continuous adaptation to seasonal and weather conditions is necessary. The results show that IRMA significantly improves prediction performance.
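As an illustration only, the following minimal Python sketch captures the core idea described in the abstract: each on-line update fits the newest example while a stiffness hyper-parameter limits how much the previous hypothesis may change globally. It is not the authors' IRMA implementation; the class name, the quadratic stiffness penalty, and the closed-form update are assumptions made for this sketch.

# Hypothetical sketch, not the IRMA algorithm from the paper: an on-line
# linear model whose per-example update trades off fitting the new example
# against moving the previous hypothesis, controlled by a stiffness term.
import numpy as np

class LocalizedOnlineLearner:
    def __init__(self, n_features, stiffness=1.0):
        # Larger stiffness keeps the previous hypothesis (small global change);
        # smaller stiffness tracks new data more aggressively.
        self.w = np.zeros(n_features)
        self.stiffness = stiffness

    def predict(self, x):
        return float(np.dot(self.w, x))

    def update(self, x, y):
        # Closed-form minimizer of (w.x - y)^2 + stiffness * ||w - w_old||^2,
        # i.e. a ridge-regularized single-example (passive-aggressive-like) step.
        error = y - self.predict(x)
        self.w = self.w + (error / (np.dot(x, x) + self.stiffness)) * x
        return error

# Usage on a slowly drifting target, mimicking a non-stationary data stream.
rng = np.random.default_rng(0)
model = LocalizedOnlineLearner(n_features=3, stiffness=0.5)
w_true = np.array([1.0, -2.0, 0.5])
for t in range(200):
    x = rng.normal(size=3)
    w_true += 0.01 * rng.normal(size=3)  # gradual concept drift
    y = float(np.dot(w_true, x))
    model.update(x, y)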
Pages: 7