Physical Adversarial Attacks for Camera-Based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook

Cited by: 6
Authors
Guesmi, Amira [1 ]
Hanif, Muhammad Abdullah [1 ]
Ouni, Bassem [2 ]
Shafique, Muhammad [1 ]
Affiliations
[1] New York Univ NYU, Div Engn, eBrain Lab, Abu Dhabi, U Arab Emirates
[2] Technol Innovat Inst TII, AI & Digital Sci Res Ctr, Abu Dhabi, U Arab Emirates
Keywords
Machine learning security; physical adversarial attacks; smart systems; camera-based vision system robustness; security; trustworthy AI; computer vision; image classification; object detection; person detection; vehicle detection; semantic segmentation; monocular depth estimation; face recognition; person re-identification; optical flow; stealthy physical attacks; patch-based attacks; sticker-based attacks; camouflage techniques; light manipulation-based attacks; physical adversarial prints; imaging device manipulation; expectation over transformation; adversarial t-shirts; adversarial makeup; adversarial eyeglasses; adversarial masks
DOI
10.1109/ACCESS.2023.3321118
Chinese Library Classification
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Deep Neural Networks (DNNs) have shown impressive performance in computer vision tasks; however, their vulnerability to adversarial attacks raises concerns about their security and reliability. Extensive research has shown that DNNs can be compromised by carefully crafted perturbations, leading to significant performance degradation in both the digital and physical domains. Ensuring the security of DNN-based systems is therefore crucial, particularly in safety-critical domains such as autonomous driving, robotics, smart homes/cities, smart industries, video surveillance, and healthcare. In this paper, we present a comprehensive survey of current trends, focusing specifically on physical adversarial attacks. We aim to provide a thorough understanding of physical adversarial attacks, analyzing their key characteristics and distinguishing features. Furthermore, we explore the specific requirements and challenges associated with executing attacks in the physical world. The article covers various physical adversarial attack methods, categorized according to their target tasks across applications, including classification, detection, face recognition, semantic segmentation, and depth estimation. We assess the performance of these attack methods in terms of their effectiveness, stealthiness, and robustness, examining how each technique strives to manipulate DNNs successfully while evading detection and withstanding real-world distortions. Lastly, we discuss current challenges and outline potential future research directions, highlighting the need for enhanced defense mechanisms, the exploration of novel attack strategies, the evaluation of attacks in different application domains, and the establishment of standardized benchmarks and evaluation criteria for physical adversarial attacks.
Through this comprehensive survey, we aim to provide a valuable resource for researchers, practitioners, and policymakers to gain a holistic understanding of physical adversarial attacks in computer vision and facilitate the development of robust and secure DNN-based systems.
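The patch-based attacks and the Expectation over Transformation (EOT) technique named in the keywords can be illustrated with a minimal sketch. Assumptions: a toy linear classifier stands in for a real DNN, the "image" is a flat 64-pixel vector, and random brightness scaling is the only physical transformation modeled; the attacks surveyed in the paper additionally optimize over rotations, perspective changes, and printer color error against deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier" standing in for a deployed vision DNN
# (assumption: 8x8 grayscale image flattened to 64 pixels, 5 classes).
n_pix, n_cls = 64, 5
W = rng.normal(size=(n_cls, n_pix))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_and_input_grad(x, y):
    """Cross-entropy loss at class y and its gradient w.r.t. the pixels."""
    p = softmax(W @ x)
    onehot = np.eye(n_cls)[y]
    return -np.log(p[y] + 1e-12), W.T @ (p - onehot)

img = rng.uniform(size=n_pix)
y_clean = int(np.argmax(W @ img))        # the model's clean prediction
mask = np.zeros(n_pix)
mask[:16] = 1.0                          # the patch occupies 16 of 64 pixels

patch = rng.uniform(size=n_pix) * mask
step = 0.05
for _ in range(200):
    g = np.zeros(n_pix)
    # Expectation over Transformation: average the gradient over random
    # brightness scalings, a stand-in for real-world lighting, pose,
    # and printing distortions.
    for _ in range(8):
        b = rng.uniform(0.7, 1.3)
        x = np.clip(b * (img * (1 - mask) + patch), 0.0, 1.0)
        _, gx = loss_and_input_grad(x, y_clean)
        g += b * gx * mask               # chain rule; clip subgradient ignored
    patch = np.clip(patch + step * g / 8, 0.0, 1.0) * mask

adv = img * (1 - mask) + patch           # printable: all pixels stay in [0, 1]
print("clean prediction:", y_clean,
      "| patched prediction:", int(np.argmax(W @ adv)))
```

Maximizing the *expected* loss over sampled transformations, rather than the loss on a single rendering, is what lets a printed patch keep degrading the model's output under varying viewing conditions.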
Pages: 109617 - 109668
Page count: 52