Hybrid compression for LSTM-based encrypted traffic classification model

Cited by: 0
Authors
Mu Q. [1 ]
Zhang M. [1 ]
Affiliations
[1] College of Computer Science and Technology, Jilin University, Changchun, Jilin, China
Keywords
deep learning; encrypted traffic classification; filter pruning; knowledge distillation
DOI
10.1504/IJWMC.2024.136587
Abstract
Traditional techniques for network traffic classification are no longer effective in handling the complexities of dynamic network environments. Moreover, deep learning methods, while powerful, demand substantial spatial and computational resources, resulting in increased latency and instability. In this paper, we propose an innovative approach to network traffic classification utilising an LSTM structure. This approach incorporates network pruning, knowledge distillation, and Generative Adversarial Networks (GAN) to reduce model size, accelerate training speed without compromising accuracy, and address challenges associated with unbalanced datasets in classification problems. Our methodology involves pruning unimportant filters from the teacher model, followed by retraining and knowledge distillation to generate the student model. Experimental results show that the size of the pruned teacher model is only 25.69% of the original, resulting in a noteworthy 28.16% improvement in training speed. Additionally, the classification performance on various unbalanced traffic categories, such as VoIP and streaming, shows significant enhancement. © 2024 Inderscience Publishers. All rights reserved.
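The two compression steps the abstract outlines — pruning low-importance filters from the teacher, then distilling its soft labels into a student — can be sketched roughly as follows. This is a minimal PyTorch illustration, not the authors' implementation: the L1-norm pruning criterion follows reference [9], the soft/hard loss blend follows reference [8], and the layer sizes, keep ratio, temperature `T`, and weight `alpha` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float) -> nn.Conv2d:
    """Keep the filters with the largest L1 norms (filter pruning, cf. ref. [9])."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    # L1 norm of each output filter's weights, shape: (out_channels,)
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep = torch.topk(scores, n_keep).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       stride=conv.stride, padding=conv.padding,
                       bias=conv.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(conv.weight[keep])
        if conv.bias is not None:
            pruned.bias.copy_(conv.bias[keep])
    return pruned


def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    """Blend of soft-label KD loss (cf. ref. [8]) and hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

After pruning, the teacher is retrained to recover accuracy, and `distillation_loss` replaces plain cross-entropy when training the smaller student on the teacher's outputs.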
Pages: 61-73
Page count: 12
References
28 references in total
[1]  
Akbari I., Salahuddin M.A., Ven L., Limam N., Boutaba R., Mathieu B., Moteau S., Et al., Traffic classification in an increasingly encrypted web, Communications of the ACM, 65, 10, pp. 75-83, (2022)
[2]  
Chen A-T., Liu P., Hong D-Y., Wu J-J., Accelerate CNN models via filter pruning and sparse tensor core, Proceedings of the 10th International Symposium on Computing and Networking (CANDAR), pp. 1-9, (2021)
[3]  
Draper-Gil G., Lashkari A.H., Mamun M.S.I., Ghorbani A.A., Characterization of encrypted and VPN traffic using time-related features, Proceedings of the 2nd International Conference on Information Systems Security and Privacy (ICISSP), pp. 407-414, (2016)
[4]  
Gu X., Tian H., Dai Z., Structured attention knowledge distillation for lightweight networks, Proceedings of the 33rd Chinese Control and Decision Conference (CCDC), pp. 1726-1730, (2021)
[5]  
Guo C.Y., Li P., Hybrid pruning for convolutional neural network convolution kernel, Proceedings of the 4th International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE), pp. 432-438, (2021)
[6]  
Guo Y., Xiong G., Li Z., Shi J., Cui M., Gou G., TA-GAN: GAN based traffic augmentation for imbalanced network traffic classification, International Joint Conference on Neural Networks (IJCNN), pp. 1-8, (2021)
[7]  
He Y.J., Li W., Image-based encrypted traffic classification with convolution neural networks, IEEE Fifth International Conference on Data Science in Cyberspace (DSC), pp. 271-278, (2020)
[8]  
Hinton G., Vinyals O., Dean J., Distilling the Knowledge in a Neural Network, (2015)
[9]  
Li H., Kadav A., Durdanovic I., Samet H., Graf H.P., Pruning Filters for Efficient ConvNets, (2017)
[10]  
Li S., Lin M., Wang Y., Wu Y., Tian Y., Shao L., Ji R., Distilling a powerful student model via online knowledge distillation, IEEE Transactions on Neural Networks and Learning Systems, pp. 1-10, (2022)