DNN Model Architecture Fingerprinting Attack on CPU-GPU Edge Devices

Cited by: 8
Authors
Patwari, Kartik [1 ]
Hafiz, Syed Mahbub [1 ]
Wang, Han [1 ]
Homayoun, Houman [1 ]
Shafiq, Zubair [1 ]
Chuah, Chen-Nee [1 ]
Affiliations
[1] Univ Calif Davis, Davis, CA 95616 USA
Source
2022 IEEE 7TH EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY (EUROS&P 2022) | 2022
Keywords
DNN Model Architecture Fingerprinting; Side-Channel Attack; GPU-enabled Embedded System;
DOI
10.1109/EuroSP53844.2022.00029
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
Embedded systems for edge computing are becoming more powerful, and some are equipped with a GPU to enable on-device deep neural network (DNN) tasks such as image classification and object detection. Such DNN-based applications frequently handle sensitive user data, and their architectures are considered intellectual property to be protected. We investigate a fingerprinting attack that identifies the running DNN model's architecture family (among state-of-the-art DNN categories) on CPU-GPU edge devices. The attack stealthily analyzes aggregate system-level side-channel information, such as memory, CPU, and GPU usage, available at the user-space level. To the best of our knowledge, this is the first attack of its kind that requires neither physical nor sudo access to the victim device and collects system traces only passively, in contrast to most existing reverse-engineering-based DNN model architecture extraction attacks. We perform feature-selection analysis and supervised machine-learning classification to detect the model architecture. With a combination of RAM, CPU, and GPU features and a Random Forest classifier, the proposed attack classifies a known DNN model into its architecture family with 99% accuracy. The attack is also transferable: it classifies an unknown DNN model into the correct architecture category with 87.2% accuracy. Our feature analysis shows that memory (RAM) usage is a critical feature for such fingerprinting. Furthermore, we successfully replicate the attack on two different CPU-GPU platforms and observe similar experimental results, demonstrating its portability across platforms. We also investigate the robustness of the attack to varying background noise and to a modified DNN pipeline.
Finally, we show that the model architecture family information leaked by this stealthy attack can strengthen an adversarial attack against a victim DNN model by 2x.
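As an illustrative sketch only (not the authors' code), the pipeline described in the abstract first condenses each passively collected usage trace into fixed-length summary statistics before feeding a supervised classifier such as a Random Forest. The trace values below are invented for illustration; a real attacker would sample RAM/CPU/GPU usage from user-space interfaces while the victim model runs:

```python
import statistics

def extract_trace_features(samples):
    """Summarize one side-channel trace (e.g., RAM usage in MB sampled
    over time) into fixed-length features suitable for a classifier.
    Feature choices here are illustrative, not the paper's exact set."""
    return {
        "mean": statistics.fmean(samples),
        "std": statistics.pstdev(samples),
        "min": min(samples),
        "max": max(samples),
        "range": max(samples) - min(samples),  # peak memory swing during inference
    }

# Hypothetical RAM-usage traces (MB) for two model families.
trace_heavy = [512, 520, 760, 990, 985, 530]  # e.g., a large architecture
trace_light = [512, 515, 540, 560, 555, 520]  # e.g., a lightweight architecture

f_heavy = extract_trace_features(trace_heavy)
f_light = extract_trace_features(trace_light)
# The heavier family exhibits a much larger memory swing, which is the kind
# of RAM-centric signal the paper reports as most discriminative.
print(f_heavy["range"], f_light["range"])  # → 478 48
```

In practice, vectors of such features from many labeled runs would be used to train the Random Forest, and the same extraction applied to an unknown model's trace at attack time.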
Pages: 337-355
Page count: 19