Exploiting Deep Neural Networks as Covert Channels

Cited by: 1
Authors
Pishbin, Hora Saadaat [1 ]
Bidgoly, Amir Jalaly [1 ]
Affiliations
[1] Univ Qom, Dept Informat Technol & Comp Engn, Qom 3716146611, Iran
Keywords
Data models; Computational modeling; Deep learning; Receivers; Training; Artificial neural networks; Malware; Trustworthy machine learning; deep neural network; covert channel; deep learning attack; concealment
DOI
10.1109/TDSC.2023.3300072
CLC number
TP3 [Computing Technology, Computer Technology]
Subject classification code
0812
Abstract
As deep learning models continue to spread, their security has become increasingly important. In this work, we investigate, for the first time, the possibility of abusing a deep model as a covert channel. A covert channel uses a medium that was not designed for information exchange to transmit a hidden message. This work studies how an adversary can exploit a deep model in this way. In the proposed approach, an end-to-end trained deep model, called the covert model, produces artificial data that embeds the covert message. This artificial data is fed as input to the deep model being exploited as the covert channel, so that the hidden signal is carried in that model's output. Generative adversarial networks are used to make the concealment indistinguishable. The results show that a covert channel with acceptable message transmission power can be established in well-known deep models such as ResNet and InceptionV3. In the case studies, the channel achieves a signal-to-noise ratio (SNR) of 12.67 and a bit error rate (BER) of 0.08, while the accuracy of the deep model used to hide the signal reaches 92%.
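To make the mechanism described in the abstract concrete, the following PyTorch sketch illustrates the general idea under stated assumptions: a small encoder (the sender) is trained end-to-end to synthesize artificial inputs for a frozen, pretrained classifier (the model abused as the channel), and a paired decoder (the receiver) recovers the hidden bits from the classifier's output, after which the bit error rate (BER) is measured as in the paper's evaluation. The choice of ResNet-18, the 16-bit message length, the 64x64 artificial images, the MLP encoder/decoder, and the omission of the GAN discriminator used for indistinguishability are all illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch (not the authors' code): abuse a frozen, pretrained
    # classifier as a covert channel by training an encoder/decoder pair around it.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18, ResNet18_Weights

    N_BITS = 16      # covert message length per artificial image (assumed)
    IMG = 64         # small artificial images keep the sketch light (assumed)
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # The model being abused as the covert channel; its weights are never modified.
    channel = resnet18(weights=ResNet18_Weights.DEFAULT).to(device).eval()
    for p in channel.parameters():
        p.requires_grad_(False)

    # Sender: maps a bit vector to an artificial RGB image (the "artificial data").
    encoder = nn.Sequential(
        nn.Linear(N_BITS, 256), nn.ReLU(),
        nn.Linear(256, 3 * IMG * IMG), nn.Sigmoid(),
    ).to(device)

    # Receiver: recovers the bits from the channel's 1000-dimensional output logits.
    decoder = nn.Sequential(
        nn.Linear(1000, 256), nn.ReLU(),
        nn.Linear(256, N_BITS),
    ).to(device)

    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(300):  # short toy training loop
        bits = torch.randint(0, 2, (64, N_BITS), device=device).float()
        imgs = encoder(bits).view(-1, 3, IMG, IMG)
        logits = channel(imgs)          # gradients flow through the frozen channel
        loss = bce(decoder(logits), bits)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Bit error rate (BER) on fresh messages, mirroring the metric in the abstract.
    with torch.no_grad():
        bits = torch.randint(0, 2, (256, N_BITS), device=device).float()
        out = decoder(channel(encoder(bits).view(-1, 3, IMG, IMG)))
        ber = ((out > 0).float() != bits).float().mean().item()
    print(f"BER: {ber:.3f}")

The point the sketch captures is that gradients flow through the frozen channel model during training, so the encoder learns inputs whose outputs remain decodable by the receiver without the channel model itself ever being modified.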
Pages: 2115-2126
Number of pages: 12