A Robust Deep-Learning-Enabled Trust-Boundary Protection for Adversarial Industrial IoT Environment
Cited by: 39
Authors:
Hassan, Mohammad Mehedi [1,2]
Hassan, Md Rafiul [3]
Huda, Shamsul [4]
de Albuquerque, Victor Hugo C. [5]
Affiliations:
[1] King Saud Univ, Coll Comp & Informat Sci, Riyadh 11543, Saudi Arabia
[2] King Saud Univ, Res Chair Smart Technol, Riyadh 11543, Saudi Arabia
[3] Univ Maine Presque Isle, Coll Arts & Sci, Presque Isle, ME 04769 USA
[4] Deakin Univ, Sch Informat Technol, Burwood, Vic 3125, Australia
[5] Univ Fortaleza, Dept Comp Sci, BR-60811905 Fortaleza, Ceara, Brazil
Source:
IEEE INTERNET OF THINGS JOURNAL | 2021 / Volume 8 / Issue 12
Keywords:
Training;
Generative adversarial networks;
Robustness;
Gallium nitride;
Data models;
Internet of Things;
Machine learning;
Adversarial attack;
deep learning (DL);
Industrial Internet of Things (IIoT);
robustness;
trust boundary protection;
CYBER-PHYSICAL SYSTEMS;
ATTACKS;
MODEL;
NETWORKS;
MALWARE;
DOI:
10.1109/JIOT.2020.3019225
Chinese Library Classification (CLC): TP [Automation Technology, Computer Technology];
Subject Classification Code: 0812;
Abstract:
In recent years, trust-boundary protection has become a challenging problem in Industrial Internet of Things (IIoT) environments. Trust boundaries separate IIoT processes and data stores into different groups based on user access privilege. Points where dataflow intersects the trust boundary are becoming entry points for attackers. Attackers use various model-skewing and other intelligent techniques to generate adversarial/noisy examples that are indistinguishable from natural data. Many existing machine-learning (ML)-based approaches attempt to circumvent this problem. However, owing to the extremely large attack surface of an IIoT network, capturing the true data distribution during training is difficult. A standard generative adversarial network (GAN) commonly generates adversarial examples for training from randomly sampled noise. However, the distribution of the GAN's noisy inputs differs considerably from the actual distribution of data in IIoT networks, which yields less robustness against adversarial attacks. Therefore, in this article, we propose a downsampler-encoder-based cooperative data generator that is trained using an algorithm designed to better capture the actual distribution of attack models across the large IIoT attack surface. The proposed downsampler-based data generator is alternately updated and verified during training by a deep-neural-network discriminator to ensure robustness. This guarantees the performance of the generator against input sets with high noise levels at training and testing time. Various experiments are conducted on a real IIoT testbed data set. The experimental results show that the proposed approach outperforms conventional deep-learning and other ML techniques in terms of robustness against adversarial/noisy examples in the IIoT environment.
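The abstract describes a GAN-style alternating training loop in which the generator is a downsampler-encoder fed with real IIoT data rather than random noise, and a deep-neural-network discriminator verifies the generated samples. Below is a minimal sketch of that idea, not the authors' implementation; the PyTorch framework, layer sizes, FEATURE_DIM/LATENT_DIM, and optimizer settings are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): a downsampler-encoder generator
# that derives adversarial/noisy examples from real IIoT feature vectors, alternately
# trained against a DNN discriminator.
import torch
import torch.nn as nn

FEATURE_DIM = 64   # assumed number of IIoT traffic features
LATENT_DIM = 16    # assumed downsampled (encoded) dimension

class DownsamplerEncoderGenerator(nn.Module):
    """Downsamples real inputs to a latent code, then decodes noisy examples."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(FEATURE_DIM, 32), nn.ReLU(),
            nn.Linear(32, LATENT_DIM), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 32), nn.ReLU(),
            nn.Linear(32, FEATURE_DIM),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class Discriminator(nn.Module):
    """DNN discriminator that scores whether a sample follows the real data distribution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURE_DIM, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, real_batch, g_opt, d_opt, bce=nn.BCEWithLogitsLoss()):
    """One alternating update: discriminator first, then generator."""
    n = real_batch.size(0)
    # Discriminator update: real samples vs. generated (detached) adversarial samples.
    fake = gen(real_batch).detach()
    d_loss = bce(disc(real_batch), torch.ones(n, 1)) + bce(disc(fake), torch.zeros(n, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator update: make examples derived from real data fool the discriminator.
    g_loss = bce(disc(gen(real_batch)), torch.ones(n, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

if __name__ == "__main__":
    gen, disc = DownsamplerEncoderGenerator(), Discriminator()
    g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
    real_batch = torch.randn(128, FEATURE_DIM)  # stand-in for a real IIoT testbed batch
    for _ in range(5):
        print(train_step(gen, disc, real_batch, g_opt, d_opt))
```

The key difference from a standard GAN, as the abstract frames it, is that the generator's input is downsampled real data rather than random noise, so the generated adversarial examples stay closer to the actual IIoT data distribution.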
Pages: 9611-9621
Page count: 11