RIATIG: Reliable and Imperceptible Adversarial Text-to-Image Generation with Natural Prompts

Cited by: 10
Authors
Liu, Han [1 ]
Wu, Yuhao [1 ]
Zhai, Shixuan [1 ]
Yuan, Bo [2 ]
Zhang, Ning [1 ]
Affiliations
[1] Washington Univ, St Louis, MO 63110 USA
[2] Rutgers State Univ, Piscataway, NJ USA
Source
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | 2023
DOI
10.1109/CVPR52729.2023.01972
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The field of text-to-image generation has made remarkable strides in creating high-fidelity and photorealistic images. As this technology gains popularity, there is growing concern about its potential security risks. However, the robustness of these models from an adversarial perspective has seen limited exploration: existing research has focused primarily on untargeted settings and lacks a holistic consideration of reliability (attack success rate) and stealthiness (imperceptibility). In this paper, we propose RIATIG, a reliable and imperceptible adversarial attack against text-to-image models via inconspicuous examples. By formulating example crafting as an optimization problem and solving it with a genetic-based method, our attack can reliably generate imperceptible prompts for text-to-image generation models. Evaluation on six popular text-to-image generation models demonstrates the efficiency and stealthiness of our attack in both white-box and black-box settings. To allow the community to build on our findings, we have made the artifacts available(1).
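
The abstract describes the attack as casting adversarial prompt crafting as an optimization problem solved with a genetic-based method. The minimal Python sketch below illustrates only that general idea and is not the authors' implementation: the fitness function is a toy character-overlap surrogate (in the actual attack the score would reflect how closely the image generated from a candidate prompt matches the attacker's target), and the mutation and crossover operators, population size, and generation count are illustrative assumptions. A real attack would additionally constrain mutations so candidate prompts remain natural-looking, which is the imperceptibility requirement the paper emphasizes.

import random
import string

def fitness(prompt: str, target: str = "a photo of a cat") -> float:
    # Toy surrogate score: fraction of positions matching the target phrase.
    # In the real attack this would measure how well the image generated from
    # `prompt` matches the attacker's target semantics (an assumption here).
    return sum(a == b for a, b in zip(prompt, target)) / max(len(target), 1)

def mutate(prompt: str, rate: float = 0.1) -> str:
    # Randomly substitute characters; a real attack would instead use word or
    # subword edits constrained so the prompt still reads naturally.
    chars = list(prompt)
    for i in range(len(chars)):
        if random.random() < rate:
            chars[i] = random.choice(string.ascii_lowercase + " ")
    return "".join(chars)

def crossover(a: str, b: str) -> str:
    # Single-point crossover between two parent prompts.
    cut = random.randint(0, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def genetic_prompt_search(seed: str, generations: int = 50, pop_size: int = 20) -> str:
    # Standard genetic-algorithm loop: score, select, recombine, mutate.
    population = [mutate(seed) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[: pop_size // 2]
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + [mutate(c) for c in children]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = genetic_prompt_search("a painting of a dog")
    print(best, round(fitness(best), 3))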
Pages: 20585 - 20594
Page count: 10