Improving the transferability of adversarial examples with separable positive and negative disturbances

Cited by: 1
Authors
Yan, Yuanjie [1 ,2 ]
Bu, Yuxuan [1 ,2 ]
Shen, Furao [1 ,4 ]
Zhao, Jian [3 ]
Affiliations
[1] Nanjing Univ, Natl Key Lab Novel Software Technol, Nanjing, Peoples R China
[2] Nanjing Univ, Dept Comp Sci & Technol, Nanjing, Peoples R China
[3] Nanjing Univ, Dept Elect Sci & Engn, Nanjing, Peoples R China
[4] Nanjing Normal Univ, Sch Artificial Intelligence, Nanjing, Peoples R China
Source
NEURAL COMPUTING & APPLICATIONS | 2024, Vol. 36, Issue 7
Funding
National Natural Science Foundation of China;
Keywords
Adversarial examples; Transferability; Black-box attack;
DOI
10.1007/s00521-023-09259-5
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial examples demonstrate the vulnerability of white-box models but transfer weakly to black-box models. In image processing, an adversarial example usually consists of an original image plus a disturbance. The disturbance is essential to the adversarial example and determines the attack success rate on black-box models. To improve transferability, we propose a new white-box attack method called separable positive and negative disturbance (SPND). SPND optimizes the positive and negative disturbances instead of the adversarial examples themselves. It also smooths the search space by replacing constrained disturbances with unconstrained variables, which improves the success rate of attacks on the black-box model. Our method outperforms other attack methods on the MNIST and CIFAR-10 datasets. On the ImageNet dataset, the black-box attack success rate of SPND exceeds that of the optimal CW method by nearly ten percentage points under an L-infinity perturbation budget of 0.3.
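The core idea the abstract describes — optimizing unconstrained variables that are mapped onto separate, bounded positive and negative disturbances — can be illustrated with a minimal NumPy sketch. This is the editor's own illustration, not the authors' code: the function names (`spnd_disturbance`, `adversarial_example`) and the choice of a sigmoid change of variables (in the spirit of the tanh trick used by CW) are assumptions, since the record does not specify the exact mapping.

```python
import numpy as np

def sigmoid(w):
    """Squash an unconstrained variable into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-w))

def spnd_disturbance(w_pos, w_neg, eps):
    """Map unconstrained variables to bounded signed disturbances.

    d_pos lies in (0, eps) and d_neg in (-eps, 0), so their sum stays
    within the L-infinity budget eps while w_pos and w_neg can be
    optimized freely over all real values (a smooth search space).
    """
    d_pos = eps * sigmoid(w_pos)    # positive disturbance in (0, eps)
    d_neg = -eps * sigmoid(w_neg)   # negative disturbance in (-eps, 0)
    return d_pos, d_neg

def adversarial_example(x, w_pos, w_neg, eps=0.3):
    """Combine image and disturbances, keeping pixels in [0, 1]."""
    d_pos, d_neg = spnd_disturbance(w_pos, w_neg, eps)
    return np.clip(x + d_pos + d_neg, 0.0, 1.0)
```

In a real attack, `w_pos` and `w_neg` would be updated by gradient descent on the model's loss; the sketch only shows how the reparameterization keeps the total disturbance inside the stated L-infinity = 0.3 budget without explicit constraints.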
Pages: 3725-3736
Page count: 12