To make yourself invisible with Adversarial Semantic Contours

Cited by: 2
Authors
Zhang, Yichi [1 ]
Zhu, Zijian [2 ]
Su, Hang [1 ]
Zhu, Jun [1 ]
Zheng, Shibao [2 ]
He, Yuan [3 ]
Xue, Hui [3 ]
Affiliations
[1] Tsinghua Univ, Inst Artificial Intelligence, Dept Comp Sci & Technol, THBI Lab, Beijing 100084, Peoples R China
[2] Shanghai Jiao Tong Univ, Inst Image Commun, Network Engn, Shanghai 200240, Peoples R China
[3] Alibaba Grp, Hangzhou 311121, Peoples R China
Keywords
Adversarial examples; Sparse attacks; Object detection; Detection transformer;
DOI
10.1016/j.cviu.2023.103659
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Modern object detectors are vulnerable to adversarial examples, which poses risks to real-world applications. In contrast to the popular approach of perturbing the whole image, a sparse attack must select the pixels to perturb, a choice generally regularized by an l0-norm constraint, while simultaneously optimizing the texture placed on them. The non-differentiability of the l0 norm makes this challenging, and many prior attacks on object detection resort to manually designed patterns that are semantically meaningless and independent of the target objects, leading to relatively poor attack performance. In this paper, we propose Adversarial Semantic Contour (ASC), a MAP estimate of a Bayesian formulation of the sparse attack with a deceived prior of the object contour. The contour prior effectively reduces the search space of pixel selection and strengthens the attack by introducing semantic bias. Extensive experiments demonstrate that ASC can corrupt the predictions of 9 modern detectors with different architectures (e.g., one-stage, two-stage, and Transformer-based) by modifying fewer than 5% of the pixels of the object area on COCO in the white-box scenario, and around 10% in the black-box scenario. We further extend the attack to datasets for autonomous driving systems to verify its effectiveness. We conclude that object contours are a common weakness of detectors across architectures, and that care is needed when applying such detectors in safety-sensitive scenarios.
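The pixel-selection step the abstract describes can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the contour prior is crudely approximated by taking the top ~5% of pixels by image-gradient magnitude, and a bounded random perturbation (standing in for the texture that the actual attack optimizes against the detector's loss) is applied only on those pixels.

```python
import numpy as np

def contour_mask(img, frac=0.05):
    """Approximate a contour prior: select the top `frac` fraction of
    pixels by image-gradient magnitude (a stand-in for a real object
    contour extracted from a segmentation or edge map)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    k = max(1, int(frac * img.size))
    mask = np.zeros(img.size, dtype=bool)
    mask[np.argsort(mag.ravel())[-k:]] = True  # exactly k pixels selected
    return mask.reshape(img.shape)

def sparse_perturb(img, mask, eps=8.0, seed=0):
    """Apply a bounded perturbation only on masked pixels; in the real
    attack, the texture on these pixels would be optimized against the
    detector's loss rather than drawn at random."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-eps, eps, size=img.shape)
    return np.clip(img + noise * mask, 0.0, 255.0)

# Toy grayscale "object": a bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 200.0
mask = contour_mask(img, frac=0.05)      # concentrates on the square's edges
adv = sparse_perturb(img, mask)
changed = np.count_nonzero(adv != img)   # at most 5% of pixels are touched
```

Restricting the optimization to the mask is what makes the attack sparse: every pixel outside the contour is provably untouched, and the l0 budget is enforced by construction rather than by a non-differentiable penalty.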
Pages: 11