Exploiting Deep Neural Networks as Covert Channels

Cited: 0
Authors
Pishbin, Hora Saadaat [1 ]
Bidgoly, Amir Jalaly [1 ]
Affiliations
[1] Univ Qom, Dept Informat Technol & Comp Engn, Qom 3716146611, Iran
Keywords
Data models; Computational modeling; Deep learning; Receivers; Training; Artificial neural networks; Malware; Trustworthy machine learning; deep neural network; covert channel; deep learning attack; concealment;
DOI
10.1109/TDSC.2023.3300072
Chinese Library Classification
TP3 [Computing technology, computer technology];
Discipline classification code
0812;
Abstract
With the increasing adoption of deep learning models, their security has become more important. In this work, for the first time, we investigate the possibility of abusing a deep model as a covert channel. A covert channel uses a channel that was not designed for information exchange to transmit a hidden message. This work studies how an adversary can use a deep model as such a channel. The proposed approach uses an end-to-end trained deep model, called the covert model, to produce artificial data that embeds a covert message. This artificial data is fed as input to the deep model being exploited as a covert channel, so that the signal is concealed within that model's output. To make the concealment indistinguishable, generative adversarial networks are used. The results show that a covert channel with acceptable message transmission power is feasible in well-known deep models such as ResNet and InceptionV3. Case studies indicate a signal-to-noise ratio (SNR) of 12.67, a bit error rate (BER) of 0.08, and that the accuracy of the deep model used to hide the signal reaches 92%.
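The mechanism described in the abstract can be illustrated with a toy sketch. The paper's actual system learns the encoder/decoder end-to-end and uses a GAN so the stego inputs look like natural data; the sketch below replaces all of that, as a labeled assumption, with a fixed random linear map standing in for the target deep model (e.g. ResNet) and hand-built carrier vectors standing in for the learned covert model. The sender hides one bit per carrier inside an innocuous-looking input; the receiver observes only the target model's output and thresholds it to recover the bits.

```python
import numpy as np

# Toy illustration only: the real work trains a "covert model" end-to-end
# with a GAN; here a fixed linear map stands in for the target deep model
# and hand-built carriers stand in for the learned encoder/decoder.
rng = np.random.default_rng(0)

IN_DIM, OUT_DIM = 128, 8                       # toy "model" dimensions
W = rng.standard_normal((OUT_DIM, IN_DIM))     # stand-in for the public model

def target_model(x):
    """The deep model abused as a covert channel (here: one linear layer)."""
    return W @ x

# Sender side (white-box): pick input-space carriers whose images under the
# model are the output-space unit vectors, so each bit lands in one output
# coordinate of the model.
carriers = np.linalg.pinv(W) @ np.eye(OUT_DIM)  # shape (IN_DIM, OUT_DIM)

def encode(bits, strength=100.0):
    """Hide one bit per carrier inside otherwise benign-looking input data."""
    cover = rng.standard_normal(IN_DIM)         # innocuous cover input
    signs = 2 * np.asarray(bits) - 1            # map {0, 1} -> {-1, +1}
    return cover + strength * carriers @ signs

def decode(y):
    """Receiver sees only the MODEL OUTPUT and thresholds each coordinate."""
    return (y > 0).astype(int).tolist()

bits = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = decode(target_model(encode(bits)))
ber = float(np.mean(np.asarray(bits) != np.asarray(recovered)))  # bit error rate
```

With a large embedding strength the cover data barely perturbs the decision, so the toy BER is near zero; the paper's learned, perceptually constrained channel naturally pays a higher price (BER 0.08) to keep the stego inputs indistinguishable.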
Pages: 2115-2126
Page count: 12
Related papers
50 entries in total
  • [21] Convex Formulation of Overparameterized Deep Neural Networks
    Fang, Cong
    Gu, Yihong
    Zhang, Weizhong
    Zhang, Tong
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2022, 68 (08) : 5340 - 5352
  • [22] Deep Neural Networks and Tabular Data: A Survey
    Borisov, Vadim
    Leemann, Tobias
    Sessler, Kathrin
    Haug, Johannes
    Pawelczyk, Martin
    Kasneci, Gjergji
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (06) : 7499 - 7519
  • [23] Diffense: Defense Against Backdoor Attacks on Deep Neural Networks With Latent Diffusion
    Hu, Bowen
    Chang, Chip-Hong
    IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2024, 14 (04) : 729 - 742
  • [24] Exploiting error control in network traffic for robust, high rate covert channels
    Geissler, William K.
    McEachen, John C.
    INTERNATIONAL JOURNAL OF ELECTRONIC SECURITY AND DIGITAL FORENSICS, 2007, 1 (02) : 180 - 193
  • [25] Privacy-Preserving Computation Offloading for Parallel Deep Neural Networks Training
    Mao, Yunlong
    Hong, Wenbo
    Wang, Heng
    Li, Qun
    Zhong, Sheng
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2021, 32 (07) : 1777 - 1788
  • [26] A Deep Architecture for Content-based Recommendations Exploiting Recurrent Neural Networks
    Suglia, Alessandro
    Greco, Claudio
    Musto, Cataldo
    de Gemmis, Marco
    Lops, Pasquale
    Semeraro, Giovanni
    PROCEEDINGS OF THE 25TH CONFERENCE ON USER MODELING, ADAPTATION AND PERSONALIZATION (UMAP'17), 2017, : 202 - 211
  • [27] Covert channels in ad-hoc wireless networks
    Li, Song
    Ephremides, Anthony
    AD HOC NETWORKS, 2010, 8 (02) : 135 - 147
  • [28] Tweaking Deep Neural Networks
    Kim, Jinwook
    Yoon, Heeyong
    Kim, Min-Soo
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (09) : 5715 - 5728
  • [29] Ranking with Deep Neural Networks
    Prakash, Chandan
    Sarkar, Amitrajit
    PROCEEDINGS OF 2018 FIFTH INTERNATIONAL CONFERENCE ON EMERGING APPLICATIONS OF INFORMATION TECHNOLOGY (EAIT), 2018,
  • [30] Orthogonal Deep Neural Networks
    Li, Shuai
    Jia, Kui
    Wen, Yuxin
    Liu, Tongliang
    Tao, Dacheng
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (04) : 1352 - 1368