Physical Adversarial Attack Meets Computer Vision: A Decade Survey

Cited by: 2
Authors
Wei, Hui [1]
Tang, Hao [2]
Jia, Xuemei [1]
Wang, Zhixiang [3,4]
Yu, Hanxun [5]
Li, Zhubo [6]
Satoh, Shin'ichi [3,4]
Van Gool, Luc [7,8,9]
Wang, Zheng [1]
Affiliations
[1] Wuhan Univ, Natl Engn Res Ctr Multimedia Software, Sch Comp Sci, Wuhan 430072, Peoples R China
[2] Peking Univ, Sch Comp Sci, Natl Key Lab Multimedia Informat Proc, Beijing 100871, Peoples R China
[3] Natl Inst Informat, Digital Content & Media Sci Res Div, Chiyoda City 1010003, Japan
[4] Univ Tokyo, Grad Sch Informat Sci & Technol, Dept Informat & Commun Engn, Bunkyo City 1138654, Japan
[5] Zhejiang Univ, Coll Software & Technol, Hangzhou 310027, Peoples R China
[6] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Peoples R China
[7] Swiss Fed Inst Technol, Comp Vis Lab, CH-8092 Zurich, Switzerland
[8] Katholieke Univ Leuven, B-3000 Leuven, Belgium
[9] INSAIT, Sofia 1784, Bulgaria
Funding
National Natural Science Foundation of China;
Keywords
Perturbation methods; Data models; Biological system modeling; Task analysis; Predictive models; Computer vision; Surveys; Adversarial attack; adversarial medium; computer vision; physical world; survey; NEURAL-NETWORK;
DOI
10.1109/TPAMI.2024.3430860
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Despite the impressive achievements of Deep Neural Networks (DNNs) in computer vision, their vulnerability to adversarial attacks remains a critical concern. Extensive research has demonstrated that incorporating carefully crafted perturbations into input images can catastrophically degrade a DNN's performance. This perplexing phenomenon exists not only in the digital space but also in the physical world. Consequently, it is imperative to evaluate the security of DNN-based systems to ensure their safe deployment in real-world scenarios, particularly in security-sensitive applications. To facilitate a deep understanding of this topic, this paper presents a comprehensive overview of physical adversarial attacks. First, we distill four general steps for launching physical adversarial attacks. Building on this foundation, we highlight the pervasive role of the artifacts that carry adversarial perturbations in the physical world; these artifacts influence every step, and to denote them we introduce a new term: adversarial medium. We then take a first step toward systematically evaluating the performance of physical adversarial attacks, using the adversarial medium as the organizing perspective. Our proposed evaluation metric, hiPAA, comprises six perspectives: Effectiveness, Stealthiness, Robustness, Practicability, Aesthetics, and Economics. We also provide comparative results across task categories, together with insightful observations and suggestions for future research directions.
Pages: 9797-9817
Page count: 21
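
To make the abstract's central claim concrete, the minimal sketch below crafts a digital adversarial perturbation with the classic FGSM baseline (a standard textbook method, not the survey's own technique). It assumes PyTorch is available and uses a toy, randomly initialized classifier purely as a stand-in; a physical attack, as the survey describes, would add further steps on top of this, such as fabricating the perturbation on an adversarial medium and keeping it robust under real-world transformations.

```python
# Minimal FGSM sketch (illustrative baseline, not the survey's method).
# Assumes PyTorch; the toy model and inputs are placeholders.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 eps: float = 8 / 255) -> torch.Tensor:
    """Return x shifted by eps in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Toy 10-class classifier over 32x32 RGB images (random weights).
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)   # "clean" image in [0, 1]
    y = torch.tensor([3])          # arbitrary ground-truth label
    x_adv = fgsm_perturb(model, x, y)
    print(f"max |delta| = {(x_adv - x).abs().max():.4f}")  # bounded by eps
```

The abstract also names the paper's hiPAA metric, which scores attacks along six perspectives. The record does not reproduce the paper's actual aggregation rule, so the sketch below is a hypothetical weighted average over the six named axes; the function name, the weights, and the [0, 1] score scale are all assumptions made only for illustration.

```python
# Hypothetical hiPAA-style aggregation (the paper's actual formula and
# weights are not reproduced in this record; everything below is illustrative).
from dataclasses import dataclass

@dataclass
class AttackScores:
    effectiveness: float   # each score assumed normalized to [0, 1]
    stealthiness: float
    robustness: float
    practicability: float
    aesthetics: float
    economics: float

def hipaa_score(s: AttackScores,
                weights=(0.3, 0.2, 0.2, 0.1, 0.1, 0.1)) -> float:
    """Weighted average over the six perspectives (weights are made up)."""
    vals = (s.effectiveness, s.stealthiness, s.robustness,
            s.practicability, s.aesthetics, s.economics)
    return sum(w * v for w, v in zip(weights, vals)) / sum(weights)

print(hipaa_score(AttackScores(0.9, 0.6, 0.7, 0.8, 0.5, 0.7)))  # -> 0.73
```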