Boosting the transferability of adversarial CAPTCHAs

Cited by: 1
Authors
Xu, Zisheng [1]; Yan, Qiao [1]
Affiliations
[1] Shenzhen Univ, Coll Comp & Software, Shenzhen 518000, Guangdong Province, Peoples R China
Keywords
Adversarial examples; Adversarial CAPTCHAs; Feature space attack
DOI
10.1016/j.cose.2024.104000
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a test to distinguish humans from computers. Since attackers can recognize CAPTCHAs with high accuracy using deep learning models, geometric transformations are added to CAPTCHAs to disrupt model recognition. However, excessive geometric transformations may also impair humans' recognition of the CAPTCHA. Adversarial CAPTCHAs are special CAPTCHAs that can disrupt deep learning models without affecting humans. Previous work on adversarial CAPTCHAs mainly focuses on defending against filtering attacks. In real-world scenarios, the attackers' models are inaccessible when generating adversarial CAPTCHAs, and the attackers may use models with different architectures; thus it is crucial to improve the transferability of adversarial CAPTCHAs. We propose CFA, a method to generate more transferable adversarial CAPTCHAs that focuses on altering the content features of the original CAPTCHA. We use the attack success rate as our metric to evaluate the effectiveness of our method when attacking various models; a higher attack success rate means a higher level of preventing models from recognizing the CAPTCHAs. Experiments show that our method can effectively attack various models, even in the presence of defense methods an attacker might use. Our method outperforms other feature space attacks and provides a more secure version of adversarial CAPTCHAs.
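The abstract describes a feature-space attack: rather than attacking output logits, the perturbation is optimized to push the image's internal feature representation away from that of the clean image, which tends to transfer better across model architectures. The paper's CFA method operates on content features of a deep network; the sketch below is only a minimal illustration of that optimization loop, using a hypothetical random linear map as a stand-in "feature extractor" (a real implementation would use an intermediate CNN layer) and an FGSM-style iterative sign ascent under an L-infinity budget.

```python
import numpy as np

# Hypothetical linear "feature extractor": features = W @ x.
# CFA itself uses content features from a deep model; this linear map
# is a stand-in so the attack loop can be shown self-contained.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))
x = rng.standard_normal(64)            # stand-in for a flattened CAPTCHA image
eps = 0.1                              # L_inf perturbation budget
alpha = 0.01                           # step size per iteration

# Random start inside the budget (a zero start would have zero gradient here).
delta = rng.uniform(-eps, eps, size=x.shape)

f_clean = W @ x
dist0 = np.linalg.norm(W @ (x + delta) - f_clean)  # feature distance before attack

for _ in range(20):
    # Gradient of ||W(x+delta) - W x||^2 w.r.t. delta is 2 W^T W delta;
    # ascend to push the adversarial features away from the clean features,
    # then project back into the L_inf ball of radius eps.
    grad = 2.0 * W.T @ (W @ (x + delta) - f_clean)
    delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)

x_adv = x + delta
dist = np.linalg.norm(W @ x_adv - f_clean)  # feature distance after attack
```

After the loop, `dist` exceeds `dist0` while every pixel of the perturbation stays within the `eps` budget; with a real model, the larger feature-space distance is what degrades recognition across differently-architected attacker models.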
Pages: 9
Related Papers
50 records in total
  • [21] The Ultimate Combo: Boosting Adversarial Example Transferability by Composing Data Augmentations
    Yun, Zebin
    Weingarten, Achi-Or
    Ronen, Eyal
    Sharif, Mahmood
    PROCEEDINGS OF THE 2024 WORKSHOP ON ARTIFICIAL INTELLIGENCE AND SECURITY, AISEC 2024, 2024, : 113 - 124
  • [22] Boosting Adversarial Transferability via Logits Mixup With Dominant Decomposed Feature
    Weng, Juanjuan
    Luo, Zhiming
    Li, Shaozi
    Lin, Dazhen
    Zhong, Zhun
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 8939 - 8951
  • [23] Boosting Transferability of Targeted Adversarial Examples via Hierarchical Generative Networks
    Yang, Xiao
    Dong, Yinpeng
    Pang, Tianyu
    Su, Hang
    Zhu, Jun
    COMPUTER VISION - ECCV 2022, PT IV, 2022, 13664 : 725 - 742
  • [24] Boosting Adversarial Transferability with Shallow-Feature Attack on SAR Images
    Lin, Gengyou
    Pan, Zhisong
    Zhou, Xingyu
    Duan, Yexin
    Bai, Wei
    Zhan, Dazhi
    Zhu, Leqian
    Zhao, Gaoqiang
    Li, Tao
    REMOTE SENSING, 2023, 15 (10)
  • [25] Boosting the transferability of adversarial attacks with adaptive points selecting in temporal neighborhood
    Zhu, Hegui
    Zheng, Haoran
    Zhu, Ying
    Sui, Xiaoyan
    INFORMATION SCIENCES, 2023, 641
  • [26] LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity
    Gubri, Martin
    Cordy, Maxime
    Papadakis, Mike
    Le Traon, Yves
    Sen, Koushik
    COMPUTER VISION - ECCV 2022, PT IV, 2022, 13664 : 603 - 618
  • [27] Probability-Distribution-Guided Adversarial Sample Attacks for Boosting Transferability and Interpretability
    Li, Hongying
    Yu, Miaomiao
    Li, Xiaofei
    Zhang, Jun
    Li, Shuohao
    Lei, Jun
    Huang, Hairong
    MATHEMATICS, 2023, 11 (13)
  • [28] Boosting Adversarial Transferability across Model Genus by Deformation-Constrained Warping
    Lin, Qinliang
    Luo, Cheng
    Niu, Zenghao
    He, Xilin
    Xie, Weicheng
    Hou, Yuanbo
    Shen, Linlin
    Song, Siyang
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 4, 2024, : 3459 - 3467
  • [29] Boosting Adversarial Transferability via Relative Feature Importance-Aware Attacks
    Li, Jian-Wei
    Shao, Wen-Ze
    Sun, Yu-Bao
    Wang, Li-Qian
    Ge, Qi
    Xiao, Liang
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2025, 20 : 3489 - 3504
  • [30] MixCam-attack: Boosting the transferability of adversarial examples with targeted data augmentation
    Guo, Sensen
    Li, Xiaoyu
    Zhu, Peican
    Wang, Baocang
    Mu, Zhiying
    Zhao, Jinxiong
    INFORMATION SCIENCES, 2024, 657