Live Demonstration: Man-in-the-Middle Attack on Edge Artificial Intelligence

Cited: 0
Authors
Hu, Bowen [1 ]
He, Weiyang [1 ]
Wang, Si [1 ]
Liu, Wenye [1 ]
Chang, Chip-Hong [1 ]
Affiliations
[1] Nanyang Technol Univ, Ctr Integrated Circuits & Syst, Sch Elect & Elect Engn, Singapore, Singapore
Source
2024 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, ISCAS 2024 | 2024
Funding
National Research Foundation, Singapore
DOI
10.1109/ISCAS58744.2024.10558371
CLC number
TP39 [Applications of Computers]
Subject classification codes
081203; 0835
Abstract
Deep neural networks (DNNs) are susceptible to evasion attacks. Digital adversarial examples, however, are typically applied to pre-captured static images: the perturbations are generated offline by loss optimization with knowledge of the target model's hyperparameters. Physical adversarial examples, on the other hand, tamper with the physical target or use a realistically fabricated target to fool the DNN, and require a sufficient number of pristine target samples captured under varying environmental conditions to create the perturbations. Neither digital nor physical input evasion attacks are robust to dynamic object scene variations, and their adversarial effects are often weakened by model reduction and quantization when the DNNs are deployed on edge artificial intelligence (AI) accelerator platforms. This demonstration presents a practical man-in-the-middle (MITM) attack on an edge DNN. A tiny MIPI FPGA chip with hardened CSI-2 and D-PHY blocks is attached between the camera and the edge AI accelerator to inject unobtrusive stripes into the RAW image data. The attack is less influenced by dynamic context variations such as changes in viewing angle, illumination, and distance of the target from the camera.
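The effect of the interposer can be sketched in software as an additive stripe perturbation on a Bayer RAW frame. This is a minimal illustration only: in the demo the perturbation is applied in-line on the MIPI FPGA, and the stripe period, width, and amplitude below are assumptions, not values from the paper.

```python
import numpy as np

def inject_stripes(raw, period=32, width=2, delta=8):
    """Add faint vertical stripes to a 10-bit Bayer RAW frame.

    Hypothetical parameters: every `period` columns, `width` columns
    receive a small additive offset `delta`, clipped to the 10-bit
    RAW range [0, 1023]. The paper's actual stripe pattern is not
    specified in the abstract.
    """
    out = raw.astype(np.int32).copy()
    cols = np.arange(raw.shape[1])
    mask = (cols % period) < width              # columns carrying a stripe
    out[:, mask] += delta                       # unobtrusive additive offset
    return np.clip(out, 0, 1023).astype(raw.dtype)

# Example: tamper with a flat mid-gray 640x480 10-bit frame
frame = np.full((480, 640), 512, dtype=np.uint16)
tampered = inject_stripes(frame)
```

Keeping `delta` small relative to the 10-bit range is what makes the stripes visually unobtrusive while still perturbing the downstream DNN input after demosaicing and quantization.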
Pages: 1