An Instance-based Transfer Learning Approach, Applied to Intrusion Detection

Cited by: 0
Authors
Kawish, Sonia [1 ]
Louafi, Habib [2 ]
Yao, Yiyu [1 ]
Affiliations
[1] Univ Regina, Dept Comp Sci, Regina, SK, Canada
[2] TELUQ Univ, Dept Sci & Technol, Montreal, PQ, Canada
Source
2023 20TH ANNUAL INTERNATIONAL CONFERENCE ON PRIVACY, SECURITY AND TRUST, PST | 2023
Keywords
Intrusion detection system; transfer learning; machine learning; instance-based; TrAdaBoost; maximum mean discrepancy; NETWORK;
DOI
10.1109/PST58708.2023.10319986
Chinese Library Classification (CLC)
TP [automation technology, computer technology]
Subject Classification Code
0812
Abstract
To detect malicious activities in the network, Intrusion Detection Systems (IDS) are deployed. One way to build an IDS is through Machine Learning (ML) techniques. However, ML-based IDS models have a few shortcomings. Their performance degrades when new attacks emerge, such as zero-day attacks, because traditional techniques assume that training and testing data come from the same distribution; when new attacks appear, the underlying distribution changes and the performance of the model suffers. Moreover, samples of new attacks may be scarce. In this paper, we present a solution to train an IDS model when only scarce data is available, using an instance-based Transfer Learning (TL) approach. This approach increases the sample size in the Target Domain by using similar instances from a related Source Domain. We conducted our experiments on the UNSW-NB15 dataset and obtained appealing results: 92.5%, 88.4%, 86.5%, and 86.8% for the widely used performance metrics Accuracy, Recall, Precision, and F1-Score, respectively. These results are obtained even though the distribution difference between the Source and Target Domains, as measured with the Maximum Mean Discrepancy (MMD) metric, is substantial.
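The abstract quantifies the Source/Target distribution gap with the Maximum Mean Discrepancy (MMD). Below is a minimal sketch of the standard biased MMD estimate with an RBF kernel (Gretton et al., 2012); the NumPy implementation, the gamma bandwidth, and the random stand-in feature matrices are illustrative assumptions, not the authors' code or the UNSW-NB15 data.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF (Gaussian) kernel matrix between the rows of A and B."""
    sq_dists = (
        np.sum(A ** 2, axis=1)[:, None]
        + np.sum(B ** 2, axis=1)[None, :]
        - 2.0 * A @ B.T
    )
    return np.exp(-gamma * sq_dists)

def mmd_squared(Xs, Xt, gamma=1.0):
    """Biased estimate of squared MMD between source and target samples."""
    k_ss = rbf_kernel(Xs, Xs, gamma)   # source-source similarities
    k_tt = rbf_kernel(Xt, Xt, gamma)   # target-target similarities
    k_st = rbf_kernel(Xs, Xt, gamma)   # cross-domain similarities
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

# Illustrative usage with hypothetical feature matrices standing in for
# preprocessed Source and Target Domain samples.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(500, 20))   # source-domain features (assumed)
Xt = rng.normal(0.5, 1.2, size=(100, 20))   # target-domain features (assumed)
print(f"MMD^2 between source and target: {mmd_squared(Xs, Xt, gamma=0.1):.4f}")
```

A larger MMD value indicates a larger distribution shift between the two domains; in the paper this metric is used to characterize how different the Source and Target Domains are before instances are transferred.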
Pages: 187-193
Page count: 7