Applying Deep Learning Approaches for Network Traffic Prediction

Cited by: 0
Authors
Vinayakumar, R. [1 ]
Soman, K. P. [1 ]
Poornachandran, Prabaharan [2 ]
Affiliations
[1] Amrita Univ, Amrita Vishwa Vidyapeetham, Amrita Sch Engn, Ctr Computat Engn & Networking CEN, Coimbatore, Tamil Nadu, India
[2] Amrita Univ, Amrita Vishwa Vidyapeetham, Amrita Sch Engn, Ctr Cyber Secur Syst & Networks, Amritapuri, India
Source
2017 INTERNATIONAL CONFERENCE ON ADVANCES IN COMPUTING, COMMUNICATIONS AND INFORMATICS (ICACCI) | 2017
Keywords
Network traffic matrix; Prediction; Deep Learning;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Network traffic prediction aims to forecast future network traffic from previously observed traffic data, and can serve as a proactive tool for network management and planning. The family of recurrent neural network (RNN) approaches is well known for time-series modeling, where the goal is to predict future values from past information separated by time lags of unknown size. RNNs come in several architectures, such as the simple RNN, long short-term memory (LSTM), gated recurrent unit (GRU), and identity recurrent unit (IRNN), which are capable of learning temporal patterns and long-range dependencies in large sequences of arbitrary length. To leverage the efficacy of RNN approaches for traffic matrix estimation in large networks, we evaluate these RNN variants on real data from the GEANT backbone network. Various experiments are conducted to identify the optimal network parameters and structure of the RNNs; all experiments are run for up to 200 epochs with learning rates in the range [0.01, 0.5]. LSTM performs well in comparison to the other RNN variants and classical methods, and the performance of the remaining RNN methods is comparable to that of LSTM.
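The setup the abstract describes (sliding windows of past traffic values fed through an LSTM to predict the next value) can be sketched in plain NumPy. This is an illustrative sketch only: the synthetic periodic series, window size, and hidden width below are stand-in assumptions, not the paper's actual GEANT traffic matrices or tuned hyperparameters, and the LSTM weights here are random rather than trained.

```python
import numpy as np

def make_windows(series, window):
    """Slide a fixed-size window over a 1-D traffic series.

    Each row of X holds `window` past values; the matching entry
    of y is the next value to predict.
    """
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step with the standard gating equations.

    W: (4*hidden, input), U: (4*hidden, hidden), b: (4*hidden,).
    Gate order in the stacked weights: input, forget, output, candidate.
    """
    z = W @ x_t + U @ h_prev + b
    H = h_prev.shape[0]
    i = sigmoid(z[0:H])           # input gate
    f = sigmoid(z[H:2 * H])       # forget gate
    o = sigmoid(z[2 * H:3 * H])   # output gate
    g = np.tanh(z[3 * H:4 * H])   # candidate cell state
    c = f * c_prev + i * g        # new cell state
    h = o * np.tanh(c)            # new hidden state
    return h, c

# Toy run: a noisy sine wave stands in for a real traffic time series.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.standard_normal(300)
X, y = make_windows(series, window=10)

hidden = 8
W = 0.1 * rng.standard_normal((4 * hidden, 1))
U = 0.1 * rng.standard_normal((4 * hidden, hidden))
b = np.zeros(4 * hidden)

h = np.zeros(hidden)
c = np.zeros(hidden)
for x_t in X[0]:                  # feed one window through the cell, step by step
    h, c = lstm_step(np.array([x_t]), h, c, W, U, b)
print(X.shape, y.shape, h.shape)  # (290, 10) (290,) (8,)
```

In a full pipeline, the final hidden state `h` would be mapped through a linear output layer to the predicted next traffic value and the weights fitted by gradient descent; the GRU and IRNN variants mentioned above differ only in the gating equations inside the recurrent step.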
Pages: 2353-2358
Page count: 6