Boosting the transferability of adversarial CAPTCHAs

Cited by: 1
|
Authors
Xu, Zisheng [1 ]
Yan, Qiao [1 ]
Affiliations
[1] Shenzhen Univ, Coll Comp & Software, Shenzhen 518000, Guangdong Province, Peoples R China
Keywords
Adversarial examples; Adversarial CAPTCHAs; Feature space attack
DOI
10.1016/j.cose.2024.104000
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a test that distinguishes humans from computers. Since attackers can recognize CAPTCHAs with high accuracy using deep learning models, geometric transformations are added to CAPTCHAs to disrupt model recognition. However, excessive geometric transformations may also impair human recognition of the CAPTCHA. Adversarial CAPTCHAs are special CAPTCHAs that can disrupt deep learning models without affecting humans. Previous work on adversarial CAPTCHAs mainly focuses on defending against filtering attacks. In real-world scenarios, the attackers' models are inaccessible when the adversarial CAPTCHAs are generated, and attackers may use models with different architectures, so it is crucial to improve the transferability of adversarial CAPTCHAs. We propose CFA, a method that generates more transferable adversarial CAPTCHAs by altering the content features of the original CAPTCHA. We use the attack success rate as our metric to evaluate the effectiveness of our method when attacking various models: a higher attack success rate means more CAPTCHAs are kept from being recognized by the models. Experiments show that our method effectively attacks various models, even in the presence of defense methods an attacker might use. Our method outperforms other feature space attacks and provides a more secure version of adversarial CAPTCHAs.
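To make the abstract's two key notions concrete, here is a minimal sketch, not the paper's actual CFA implementation: a feature-space attack that perturbs an input to move its "content features" away from the original within an L∞ budget, and the attack success rate metric. The linear extractor `W`, the sign-gradient update, and all parameter values are illustrative assumptions; CFA operates on deep-network feature maps rather than a linear map.

```python
import numpy as np

def feature_space_attack(x, W, eps=0.1, alpha=0.02, steps=10, seed=0):
    """Illustrative feature-space attack (NOT the paper's CFA).

    Performs sign-gradient ascent on the feature distance
    ||W(x + delta) - W x||^2 = ||W delta||^2, keeping the
    perturbation delta inside an L-infinity ball of radius eps.
    W stands in for a 'content feature' extractor.
    """
    rng = np.random.default_rng(seed)
    delta = np.zeros_like(x)
    for _ in range(steps):
        # Gradient of ||W delta||^2 with respect to delta.
        grad = 2.0 * W.T @ (W @ delta)
        if not np.any(grad):
            # delta starts at zero, so kick-start with a random direction.
            grad = rng.standard_normal(x.shape)
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return x + delta

def attack_success_rate(preds, labels):
    """Fraction of CAPTCHAs the recognizer fails on (higher = stronger attack)."""
    preds = np.asarray(preds)
    labels = np.asarray(labels)
    return float(np.mean(preds != labels))
```

The design choice mirrors the abstract: rather than attacking the classifier's output logits (which overfits to one architecture), distorting intermediate content features tends to transfer better across models with different architectures.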
Pages: 9
Related Papers
50 records total
  • [41] Uncovering the Connections Between Adversarial Transferability and Knowledge Transferability
    Liang, Kaizhao
    Zhang, Jacky Y.
    Wang, Boxin
    Yang, Zhuolin
    Koyejo, Oluwasanmi
    Li, Bo
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [42] Ranking the Transferability of Adversarial Examples
    Levy, Moshe
    Amit, Guy
    Elovici, Yuval
    Mirsky, Yisroel
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2024, 15 (05)
  • [43] Exploring Transferability on Adversarial Attacks
    Alvarez, Enrique
    Alvarez, Rafael
    Cazorla, Miguel
    IEEE ACCESS, 2023, 11 : 105545 - 105556
  • [44] On the Adversarial Transferability of ConvMixer Models
    Iijima, Ryota
    Tanaka, Miki
    Echizen, Isao
    Kiya, Hitoshi
    PROCEEDINGS OF 2022 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2022, : 1826 - 1830
  • [45] Boosting Transferability in Vision-Language Attacks via Diversification Along the Intersection Region of Adversarial Trajectory
    Gao, Sensen
    Jia, Xiaojun
    Ren, Xuhong
    Tsang, Ivor
    Guo, Qing
    COMPUTER VISION-ECCV 2024, PT LVII, 2025, 15115 : 442 - 460
  • [46] Using Generative Adversarial Networks to Break and Protect Text Captchas
    Ye, Guixin
    Tang, Zhanyong
    Fang, Dingyi
    Zhu, Zhanxing
    Feng, Yansong
    Xu, Pengfei
    Chen, Xiaojiang
    Han, Jungong
    Wang, Zheng
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2020, 23 (02)
  • [47] Improving the adversarial transferability with relational graphs ensemble adversarial attack
    Pi, Jiatian
    Luo, Chaoyang
    Xia, Fen
    Jiang, Ning
    Wu, Haiying
    Wu, Zhiyou
    FRONTIERS IN NEUROSCIENCE, 2023, 16
  • [48] An approach to improve transferability of adversarial examples
    Zhang, Weihan
    Guo, Ying
    PHYSICAL COMMUNICATION, 2024, 64
  • [49] Remix: Towards the transferability of adversarial examples
    Zhao, Hongzhi
    Hao, Lingguang
    Hao, Kuangrong
    Wei, Bing
    Cai, Xin
    NEURAL NETWORKS, 2023, 163 : 367 - 378
  • [50] Dynamic defenses and the transferability of adversarial examples
    Thomas, Sam
    Koleini, Farnoosh
    Tabrizi, Nasseh
    2022 IEEE 4TH INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS, AND APPLICATIONS, TPS-ISA, 2022, : 276 - 284