Boosting the transferability of adversarial CAPTCHAs

Cited by: 1
Authors
Xu, Zisheng [1 ]
Yan, Qiao [1 ]
Institutions
[1] Shenzhen Univ, Coll Comp & Software, Shenzhen 518000, Guangdong Province, Peoples R China
Keywords
Adversarial examples; Adversarial CAPTCHAs; Feature space attack;
DOI
10.1016/j.cose.2024.104000
Chinese Library Classification (CLC): TP [Automation technology; computer technology]
Discipline classification code: 0812
Abstract
Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a test that distinguishes humans from computers. Since attackers can recognize CAPTCHAs with high accuracy using deep learning models, geometric transformations are added to CAPTCHAs to disturb model recognition. However, excessive geometric transformations may also impair human recognition of the CAPTCHAs. Adversarial CAPTCHAs are special CAPTCHAs that can disrupt deep learning models without affecting humans. Previous work on adversarial CAPTCHAs mainly focuses on defending against the filtering attack. In real-world scenarios, the attackers' models are inaccessible when the adversarial CAPTCHAs are generated, and attackers may use models with different architectures; it is therefore crucial to improve the transferability of adversarial CAPTCHAs. We propose CFA, a method that generates more transferable adversarial CAPTCHAs by altering the content features of the original CAPTCHA. We use the attack success rate as the metric to evaluate the effectiveness of our method when attacking various models; a higher attack success rate means the models are more effectively prevented from recognizing the CAPTCHAs. Experiments show that our method can effectively attack various models, even in the presence of defense methods the attacker might deploy. Our method outperforms other feature space attacks and provides a more secure version of adversarial CAPTCHAs.
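The record does not reproduce CFA's actual loss or architecture. As a rough illustration of the general feature-space-attack idea the abstract describes (perturb the input so that its content features move away from the clean image's features while the pixel change stays within a small budget), here is a toy sketch; the linear "feature extractor", the numerical-gradient ascent loop, and all names are hypothetical, not the paper's method:

```python
import math
import random

# Toy stand-in for a content-feature extractor: a fixed random linear map.
# CFA uses features from a deep model; this only illustrates the mechanics.
random.seed(0)
DIM, FEAT = 8, 4
W = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(FEAT)]

def features(x):
    """Map an input vector to a (toy) content-feature vector."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]

def feature_distance(x, x_ref):
    """Euclidean distance between the feature vectors of two inputs."""
    f, f_ref = features(x), features(x_ref)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f, f_ref)))

def feature_space_attack(x, eps=0.1, steps=10, step_size=0.02):
    """Iteratively push the input's features away from the clean features,
    taking sign steps on a finite-difference gradient and projecting the
    perturbation back into an L-infinity ball of radius eps."""
    adv = list(x)
    for _ in range(steps):
        h = 1e-4
        grad = []
        for i in range(DIM):
            bumped = list(adv)
            bumped[i] += h
            grad.append((feature_distance(bumped, x) - feature_distance(adv, x)) / h)
        adv = [a + step_size * (1 if g >= 0 else -1) for a, g in zip(adv, grad)]
        adv = [min(max(a, xi - eps), xi + eps) for a, xi in zip(adv, x)]  # project
    return adv

clean = [random.uniform(0, 1) for _ in range(DIM)]
adv = feature_space_attack(clean)
print(feature_distance(adv, clean))                      # features displaced
print(max(abs(a - c) for a, c in zip(adv, clean)))       # stays within eps
```

The design point the abstract hinges on is that the loss is defined on features rather than on the defender model's class logits, which is what makes the perturbation less tied to one architecture and hence more transferable.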
Pages: 9
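The evaluation metric named in the abstract, attack success rate, can be sketched as the fraction of adversarial CAPTCHAs that the attacker's recognizer fails to decode; the function name and the sample strings below are illustrative, not from the paper:

```python
def attack_success_rate(predictions, labels):
    """Fraction of CAPTCHAs the recognizer got wrong: higher means the
    adversarial CAPTCHAs are better at blocking automated solving."""
    wrong = sum(p != y for p, y in zip(predictions, labels))
    return wrong / len(labels)

# One of two adversarial CAPTCHAs is misread by the recognizer.
print(attack_success_rate(["AB3D", "X7QK"], ["AB3D", "X7QR"]))  # 0.5
```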