Investigating the Added Value of Combining Regression Results from Different Window Lengths

Cited by: 1
Authors
Kerscher, Stefan [1 ]
Ludwig, Bernd [1 ]
Mueller, Nikolaus [2 ]
Affiliations
[1] Univ Regensburg, Inst Informat & Media, Language & Culture, Regensburg, Germany
[2] Deggendorf Inst Technol, Dept Elect Media & Comp Engn, Deggendorf, Germany
Source
2019 IEEE SECOND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND KNOWLEDGE ENGINEERING (AIKE) | 2019
Keywords
Regression; Neural Network; Time Series; Data Science; PCA; Feature Selection;
DOI
10.1109/AIKE.2019.00032
Chinese Library Classification Code
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Predicting the motion of a dynamic object from time series data depends strongly on the chosen window size. If the window is too short, the prediction becomes noisy and is strongly affected by measurement errors; if it is too large, the prediction becomes inert. In this paper we analyze whether there is added value in using regression results obtained from different window sizes, instead of a single one, when classifying the performed maneuver. For our investigation, we use pedestrian motion data recorded with a laser scanner. These trajectories form the basis of our regression predictions. From these predictions we compute features, based either on one regression result with a single window size or on two results with different window sizes. We then analyze which features are more significant for maneuver detection. The evaluation is carried out with feature selection methods such as Principal Component Analysis and Extremely Randomized Trees. Additionally, we train neural networks on feature sets stemming from the regression results of one or two window sizes. From the outcome of these tests we can estimate the added value of two different regression window lengths. If features that combine information from short- and long-term regression are ranked higher than features based on only one regression output, we can conclude that using different window sizes provides added value.
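To make the pipeline described in the abstract concrete, the following Python sketch fits sliding-window regressions with a short and a long window to a synthetic trajectory, builds feature sets from one or from both windows, and ranks the resulting features with scikit-learn's ExtraTreesClassifier. This is only an illustration of the idea: the trajectory generator, the window lengths (5 and 30 samples), the velocity-based features, and the maneuver labels are assumptions of this sketch, not the data or feature definitions used in the paper.

```python
# Minimal sketch of combining regression results from two window lengths.
# All data and feature definitions here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)

def window_regression_features(xy, t, win):
    """Fit a straight-line regression to the last `win` samples of a 2-D
    trajectory and return the estimated velocity (vx, vy)."""
    xs, ys, ts = xy[-win:, 0], xy[-win:, 1], t[-win:]
    vx = np.polyfit(ts, xs, 1)[0]  # slope of x over time
    vy = np.polyfit(ts, ys, 1)[0]  # slope of y over time
    return np.array([vx, vy])

def make_sample(turning):
    """Synthetic pedestrian-like track: straight walk vs. a turn (the label)."""
    t = np.arange(40) * 0.1
    heading = 0.3 * t if turning else np.zeros_like(t)
    xy = np.c_[np.cumsum(np.cos(heading)) * 0.1,
               np.cumsum(np.sin(heading)) * 0.1]
    xy += rng.normal(scale=0.02, size=xy.shape)        # measurement noise
    short = window_regression_features(xy, t, win=5)   # short window: noisy, responsive
    long_ = window_regression_features(xy, t, win=30)  # long window: smooth, inert
    combined = short - long_   # disagreement between short- and long-term estimates
    return np.r_[short, long_, combined], int(turning)

X, y = map(np.array, zip(*[make_sample(i % 2 == 0) for i in range(400)]))

# Rank single-window and combined features with Extremely Randomized Trees.
forest = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X, y)
names = ["vx_short", "vy_short", "vx_long", "vy_long", "dvx", "dvy"]
for name, importance in sorted(zip(names, forest.feature_importances_),
                               key=lambda p: -p[1]):
    print(f"{name:10s} {importance:.3f}")
```

If the combined-window features (here dvx, dvy) consistently receive higher importance scores than the single-window features, that corresponds to the ranking criterion the abstract uses to argue for an added value of multiple window lengths.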
Pages: 128-135 (8 pages)