Exploiting Deep Neural Networks as Covert Channels

Cited by: 1
Authors
Pishbin, Hora Saadaat [1 ]
Bidgoly, Amir Jalaly [1 ]
Affiliations
[1] Univ Qom, Dept Informat Technol & Comp Engn, Qom 3716146611, Iran
Keywords
Data models; Computational modeling; Deep learning; Receivers; Training; Artificial neural networks; Malware; Trustworthy machine learning; deep neural network; covert channel; deep learning attack; concealment
DOI
10.1109/TDSC.2023.3300072
CLC number
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
With the increasing adoption of deep learning models, their security has become more important. In this work, for the first time, we investigate the possibility of abusing a deep model as a covert channel. A covert channel transmits a hidden message over a channel that was not designed for information exchange. This work studies how an adversary can exploit a deep model as such a channel. The proposed approach trains an end-to-end deep model, called the covert model, to produce artificial data that embeds a covert message. This artificial data is fed as input to the deep model being exploited as a covert channel, so that the hidden signal appears in that model's output. To make the concealment indistinguishable, generative adversarial networks are used. The results show that a covert channel with acceptable message-transmission capacity is feasible in well-known deep models such as ResNet and InceptionV3. In the case studies, the signal-to-noise ratio (SNR) reaches 12.67, the bit error rate (BER) is 0.08, and the accuracy of the deep model used to hide the signal reaches 92%.
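The abstract evaluates the channel with two standard metrics, SNR and BER. As a point of reference only (a minimal sketch using the textbook definitions, not code from the paper), these can be computed as:

```python
import numpy as np

def bit_error_rate(sent_bits, recovered_bits):
    """Fraction of covert-message bits that differ after recovery."""
    sent = np.asarray(sent_bits)
    recovered = np.asarray(recovered_bits)
    return float(np.mean(sent != recovered))

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise),
    where power is the mean squared amplitude."""
    p_signal = np.mean(np.square(np.asarray(signal, dtype=float)))
    p_noise = np.mean(np.square(np.asarray(noise, dtype=float)))
    return float(10.0 * np.log10(p_signal / p_noise))
```

Whether the paper's reported SNR of 12.67 is a plain ratio or a decibel figure is not stated in the abstract; the sketch above uses the decibel convention.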
Pages: 2115-2126
Page count: 12