Realistic Adversarial Attacks on Object Detectors Using Generative Models

Cited by: 0
Authors
D. Shelepneva [1 ]
K. Arkhipenko [1 ]
Affiliations
[1] Ivannikov Institute for System Programming of the RAS, Moscow
Keywords
adversarial examples; diffusion models; generative adversarial networks; object detectors
DOI
10.1007/s10958-024-07430-4
Abstract
An important limitation of existing adversarial attacks on real-world object detectors lies in their threat model: adversarial patch-based methods often produce suspicious-looking images, while image-generation approaches do not restrict the attacker's ability to modify the original scene. We design a threat model in which the attacker modifies individual image segments and is required to produce realistic images. We also develop and evaluate a white-box attack that uses generative adversarial networks and diffusion models as generators of malicious images. Our attack produces high-fidelity images as measured by the Fréchet inception distance (FID) and reduces the mAP of a Faster R-CNN model by over 0.2 on the Cityscapes and COCO-Stuff datasets. A PyTorch implementation of our attack is available at https://github.com/DariaShel/gan-attack. © The Author(s), under exclusive licence to Springer Nature Switzerland AG 2024.
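The abstract reports image fidelity via the Fréchet inception distance (FID). As a minimal sketch (not the authors' evaluation code; feature extraction via an Inception network is assumed to have happened elsewhere), FID reduces to a closed-form distance between Gaussians fitted to two sets of feature vectors:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a, feats_b):
    """Fréchet distance between Gaussians fitted to two feature sets of shape (n_samples, dim)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        # numerical noise can introduce tiny imaginary components
        covmean = covmean.real
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(0)
a = rng.normal(size=(512, 64))  # stand-in for Inception features of real images
b = a + 5.0                     # stand-in for features of generated images
print(fid(a, a))  # near zero: identical distributions
print(fid(a, b))  # large: mean shift dominates the distance
```

In practice FID is computed on activations of a pretrained Inception-v3 network; lower values indicate generated images whose feature statistics are closer to those of real images.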
Pages: 245–254
Page count: 9