Imperceptible Adversarial Attack via Invertible Neural Networks

Citations: 0
Authors
Chen, Zihan [1 ]
Wang, Ziyue [1 ]
Huang, Jun-Jie [1 ]
Zhao, Wentao [1 ]
Liu, Xiao [1 ]
Guan, Dejian [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Comp Sci, Changsha, Hunan, Peoples R China
Source
THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 1 | 2023
Funding
National Natural Science Foundation of China;
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Adding perturbations using auxiliary gradient information or discarding existing details of the benign images are two common approaches for generating adversarial examples. Although visual imperceptibility is the desired property of adversarial examples, conventional adversarial attacks still generate traceable adversarial perturbations. In this paper, we introduce a novel Adversarial Attack via Invertible Neural Networks (AdvINN) method to produce robust and imperceptible adversarial examples. Specifically, AdvINN fully exploits the information-preservation property of Invertible Neural Networks, generating adversarial examples by simultaneously adding class-specific semantic information of the target class and dropping discriminant information of the original class. Extensive experiments on CIFAR-10, CIFAR-100, and ImageNet-1K demonstrate that the proposed AdvINN method produces less perceptible adversarial images than state-of-the-art methods, and that AdvINN yields more robust adversarial examples with higher confidence than other adversarial attacks. Code is available at https://github.com/jjhuangcs/AdvINN.
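To illustrate the information-preservation property the abstract relies on, below is a minimal sketch of an affine coupling layer, the standard invertible building block used in invertible neural networks: the forward pass conditions one branch (conceptually, the benign image) on another (conceptually, a target-class guide) while remaining exactly invertible, so no information is lost. This is not the authors' released implementation (see the GitHub link above); the class and layer names (AffineCoupling, scale_net, shift_net) are hypothetical and chosen only for illustration.

# Minimal PyTorch sketch of an affine coupling layer (illustrative only).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Invertible coupling: x2 is transformed conditioned on x1, so the
    mapping can be undone exactly (no information is discarded)."""

    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.scale_net = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1), nn.Tanh())
        self.shift_net = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1))

    def forward(self, x1, x2):
        # x1: benign-image branch, x2: target-class guide branch (conceptually)
        s = self.scale_net(x1)
        t = self.shift_net(x1)
        y1 = x1                        # untouched branch
        y2 = x2 * torch.exp(s) + t     # conditioned, still exactly invertible
        return y1, y2

    def inverse(self, y1, y2):
        s = self.scale_net(y1)
        t = self.shift_net(y1)
        x1 = y1
        x2 = (y2 - t) * torch.exp(-s)  # exact recovery of the guide branch
        return x1, x2

if __name__ == "__main__":
    layer = AffineCoupling(channels=3)
    x1 = torch.randn(1, 3, 32, 32)     # e.g. a CIFAR-10-sized benign image
    x2 = torch.randn(1, 3, 32, 32)     # e.g. a target-class guide image
    y1, y2 = layer(x1, x2)
    r1, r2 = layer.inverse(y1, y2)
    print(torch.allclose(x2, r2, atol=1e-5))  # True: no information lost

Because the transform is exactly invertible, any semantic content mixed into the output can in principle be traced back to its source branches, which is the property the abstract credits for simultaneously adding target-class information and dropping original-class information.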
Pages: 414-424
Page count: 11