AdvAttackVis: An Adversarial Attack Visualization System for Deep Neural Networks

Cited: 0
Authors
Ding Wei-jie [1 ,2 ]
Shen Xuchen [3 ]
Yuan Ying [1 ]
Mao Ting-yun [4 ]
Sun Guo-dao [5 ]
Chen Li-li [4 ]
Chen Bing-ting [6 ]
Affiliations
[1] Zhejiang Police Coll, Dept Comp & Informat Secur, Hangzhou 310053, Peoples R China
[2] Minist Publ Secur, Key Lab Publ Secur Informat Applicat Based Big Da, Hangzhou 310053, Peoples R China
[3] Hangzhou Publ Secur Bur, Xiaoshan Dist Branch, Hangzhou 310053, Peoples R China
[4] Zhejiang Dahua Technol Co Ltd, Hangzhou 310053, Peoples R China
[5] Zhejiang Univ Technol, Coll Comp Sci & Technol, Hangzhou 310023, Peoples R China
[6] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 210016, Peoples R China
Keywords
Deep learning; deep neural networks; adversarial attacks; adversarial examples; interactive visualization;
DOI
10.14569/IJACSA.2024.0150538
CLC Classification
TP301 [Theory, Methods];
Subject Classification Code
081202 ;
Abstract
Deep learning has been widely applied in scenarios such as image classification, natural language processing, and speech recognition. However, deep neural networks are vulnerable to adversarial attacks, which can cause them to make incorrect predictions. An adversarial attack consists of generating adversarial examples and using them to attack a target model. Because both the mechanism that generates adversarial examples and the way the target model responds to them are complex, adversarial attacks are difficult for deep learning users to understand. In this paper, we present an adversarial attack visualization system, AdvAttackVis, that assists users in learning, understanding, and exploring adversarial attacks. Through its interactive visualization interface, the system enables users to train and analyze adversarial attack models, understand the principles of adversarial attacks, analyze the results of attacks on the target model, and explore how the target model makes predictions on adversarial examples. Real case studies of adversarial attacks demonstrate the usability and effectiveness of the proposed visualization system.
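The abstract's notion of "generating adversarial examples and attacking a target model" can be illustrated with the classic one-step fast gradient sign method (FGSM). The sketch below is not from the paper: the softmax classifier, its random weights, and the epsilon value are all hypothetical, chosen only to show how a loss-gradient step in input space produces an adversarial example.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y_true, W, b, eps=0.5):
    """One-step FGSM: perturb x along the sign of the loss gradient.

    For a linear model z = W @ x + b with cross-entropy loss,
    d(loss)/d(logits) = softmax(z) - one_hot(y_true), and the chain
    rule maps that back to the input via W.T.
    """
    p = softmax(W @ x + b)        # model's predicted class probabilities
    grad_z = p.copy()
    grad_z[y_true] -= 1.0         # gradient of cross-entropy w.r.t. logits
    grad_x = W.T @ grad_z         # gradient w.r.t. the input itself
    return x + eps * np.sign(grad_x)

# Hypothetical 4-feature, 3-class model with random weights.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = np.zeros(3)
x = rng.normal(size=4)
label = int(np.argmax(softmax(W @ x + b)))  # treat model output as the label

x_adv = fgsm(x, label, W, b, eps=0.5)
print("clean prediction:      ", label)
print("adversarial prediction:", int(np.argmax(softmax(W @ x_adv + b))))
```

Each input coordinate moves by exactly ±eps, so the perturbation is small and uniform, yet it is chosen in the direction that increases the model's loss the fastest, which is why such examples can flip predictions while remaining close to the original input.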
Pages: 383-391
Page count: 9
Related Papers
50 in total
  • [1] ADVERSARIAL WATERMARKING TO ATTACK DEEP NEURAL NETWORKS
    Wang, Gengxing
    Chen, Xinyuan
    Xu, Chang
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 1962 - 1966
  • [2] Cocktail Universal Adversarial Attack on Deep Neural Networks
    Li, Shaoxin
    Li, Xiaofeng
    Che, Xin
    Li, Xintong
    Zhang, Yong
    Chu, Lingyang
    COMPUTER VISION - ECCV 2024, PT LXV, 2025, 15123 : 396 - 412
  • [3] Diversity Adversarial Training against Adversarial Attack on Deep Neural Networks
    Kwon, Hyun
    Lee, Jun
    SYMMETRY-BASEL, 2021, 13 (03):
  • [4] ADMM Attack: An Enhanced Adversarial Attack for Deep Neural Networks with Undetectable Distortions
    Zhao, Pu
    Xu, Kaidi
    Liu, Sijia
    Wang, Yanzhi
    Lin, Xue
    24TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE (ASP-DAC 2019), 2019, : 499 - 505
  • [5] Priority Adversarial Example in Evasion Attack on Multiple Deep Neural Networks
    Kwon, Hyun
    Yoon, Hyunsoo
    Choi, Daeseon
    2019 1ST INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE IN INFORMATION AND COMMUNICATION (ICAIIC 2019), 2019, : 399 - 404
  • [6] Understanding Adversarial Attack and Defense towards Deep Compressed Neural Networks
    Liu, Qi
    Liu, Tao
    Wen, Wujie
    CYBER SENSING 2018, 2018, 10630
  • [7] Cyclical Adversarial Attack Pierces Black-box Deep Neural Networks
    Huang, Lifeng
    Wei, Shuxin
    Gao, Chengying
    Liu, Ning
    PATTERN RECOGNITION, 2022, 131
  • [8] Query efficient black-box adversarial attack on deep neural networks
    Bai, Yang
    Wang, Yisen
    Zeng, Yuyuan
    Jiang, Yong
    Xia, Shu-Tao
    PATTERN RECOGNITION, 2023, 133
  • [9] Invisible Adversarial Attack against Deep Neural Networks: An Adaptive Penalization Approach
    Wang, Zhibo
    Song, Mengkai
    Zheng, Siyan
    Zhang, Zhifei
    Song, Yang
    Wang, Qian
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2021, 18 (03) : 1474 - 1488