Forecast evaluation for data scientists: common pitfalls and best practices

Cited by: 61
Authors
Hewamalage, Hansika [1 ]
Ackermann, Klaus [2 ,3 ]
Bergmeir, Christoph [4 ]
Affiliations
[1] Univ New South Wales, Sch Comp Sci & Engn, Sydney, Australia
[2] Monash Univ, Monash Business Sch, SoDa Labs, Melbourne, Australia
[3] Monash Univ, Monash Business Sch, Dept Econometr & Business Stat, Melbourne, Australia
[4] Monash Univ, Fac IT, Dept Data Sci & AI, Melbourne, Australia
Funding
Australian Research Council;
Keywords
Time series forecasting; Forecast evaluation; SERIES; ACCURACY; TESTS;
DOI
10.1007/s10618-022-00894-5
CLC classification number
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent trends in the Machine Learning (ML) and, in particular, Deep Learning (DL) domains have demonstrated that, with the availability of massive amounts of time series data, ML and DL techniques are competitive in time series forecasting. Nevertheless, the different forms of non-stationarity associated with time series challenge the capabilities of data-driven ML models. Furthermore, because the field of forecasting has mainly been developed by statisticians and econometricians over the years, the concepts related to forecast evaluation are not mainstream knowledge among ML researchers. We demonstrate in our work that, as a consequence, ML researchers often adopt flawed evaluation practices, which results in spurious conclusions that make methods which are not competitive in reality appear competitive. Therefore, in this work we provide a tutorial-like compilation of the details associated with forecast evaluation. In this way, we intend to present the information associated with forecast evaluation in a form that fits the context of ML, as a means of bridging the knowledge gap between traditional forecasting methods and current state-of-the-art ML techniques. We elaborate on the problematic characteristics of time series, such as non-normality and non-stationarity, and how they are associated with common pitfalls in forecast evaluation. Best practices in forecast evaluation are outlined with respect to the different steps, such as data partitioning, error calculation, and statistical testing. Further guidelines are also provided on selecting valid and suitable error measures depending on the specific characteristics of the dataset at hand.
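As a concrete illustration of two of the evaluation steps named in the abstract (temporal data partitioning and error calculation), the following minimal Python sketch performs a rolling-origin evaluation of a naive benchmark forecast scored with the Mean Absolute Scaled Error (MASE). The synthetic series, window sizes, and naive benchmark are assumptions made for illustration only and are not taken from the paper.

import numpy as np

def mase(actual, forecast, insample, m=1):
    # Mean Absolute Scaled Error: MAE of the forecast, scaled by the in-sample
    # MAE of the seasonal naive forecast with seasonal period m.
    scale = np.mean(np.abs(insample[m:] - insample[:-m]))
    return np.mean(np.abs(actual - forecast)) / scale

def rolling_origin_splits(n_obs, initial_train, horizon, step=1):
    # Yield (train_idx, test_idx) pairs with an expanding training window,
    # so every test window lies strictly after its training data.
    origin = initial_train
    while origin + horizon <= n_obs:
        yield np.arange(origin), np.arange(origin, origin + horizon)
        origin += step

# Toy example: evaluate a naive (last-value) forecast on a synthetic random walk.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=60))  # simple non-stationary series

errors = []
for train_idx, test_idx in rolling_origin_splits(len(y), initial_train=40, horizon=5):
    train, test = y[train_idx], y[test_idx]
    forecast = np.repeat(train[-1], len(test))  # naive benchmark forecast
    errors.append(mase(test, forecast, insample=train))

print(f"Mean MASE over {len(errors)} rolling origins: {np.mean(errors):.3f}")

A MASE below 1 would indicate that the evaluated forecasts are, on average, more accurate in-sample-scaled terms than the one-step naive benchmark; this scale-free property is what makes such measures suitable for comparing accuracy across series of different magnitudes.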
Pages: 788-832
Number of pages: 45