Defending Video Recognition Model Against Adversarial Perturbations via Defense Patterns

Cited by: 0
Authors
Lee, Hong Joo [1 ]
Ro, Yong Man [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol KAIST, Sch Elect Engn, Image & Video Syst Lab, Daejeon 34141, South Korea
Keywords
Computational modeling; Perturbation methods; Adaptation models; Training; Analytical models; Predictive models; Pattern recognition; Defense patterns (DPs); robust video recognition; video adversarial defense; ROBUSTNESS; ENSEMBLE
DOI
10.1109/TDSC.2023.3346064
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Deep Neural Networks (DNNs) have been widely successful in various domains, but they are vulnerable to adversarial attacks. Recent studies have demonstrated that video recognition models are also susceptible to adversarial perturbations, yet existing image-domain defense strategies transfer poorly to the video domain because they do not account for temporal dynamics, and they incur a high computational cost when used to train video recognition models. This article first investigates the temporal vulnerability of video recognition models by quantifying the effect of temporal perturbations on model performance. Based on these investigations, we propose Defense Patterns (DPs), which effectively protect video recognition models when added to the input video frames. The DPs are generated on top of a pre-trained model, eliminating the need for retraining or fine-tuning and thereby significantly reducing the computational cost. Experimental results on two benchmark datasets and various action recognition models demonstrate the effectiveness of the proposed method in enhancing the robustness of video recognition models.
Pages: 4110 - 4121
Number of pages: 12
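
The abstract outlines the core recipe: an additive pattern is optimized against a frozen, pre-trained video model and then added to input frames, so the backbone is never retrained. The following minimal PyTorch sketch illustrates that general idea only; the model interface, the PGD inner attack, the shared pattern shape, and every hyperparameter are illustrative assumptions rather than the authors' published procedure.

# Hypothetical sketch of an additive "defense pattern" learned on top of a
# frozen, pre-trained video recognition model. Every name, shape, and
# hyperparameter below is an illustrative assumption, not the paper's method.
import torch
import torch.nn.functional as F

def train_defense_pattern(model, loader, frames=16, size=112,
                          epsilon=8 / 255, steps=10, lr=1e-2, epochs=5,
                          device="cuda"):
    """Optimize one pattern, added to every clip, so that the frozen model
    stays correct under PGD-style adversarial perturbations."""
    model.eval().to(device)
    for p in model.parameters():
        p.requires_grad_(False)  # the backbone is never retrained or fine-tuned

    # A single shared pattern of shape (channels, frames, height, width);
    # it broadcasts over the batch dimension of clips with pixels in [0, 1].
    pattern = torch.zeros(3, frames, size, size, device=device, requires_grad=True)
    opt = torch.optim.Adam([pattern], lr=lr)

    for _ in range(epochs):
        for clips, labels in loader:
            clips, labels = clips.to(device), labels.to(device)

            # Inner loop: craft a PGD adversarial perturbation for the clips.
            delta = torch.zeros_like(clips).uniform_(-epsilon, epsilon)
            delta.requires_grad_(True)
            for _ in range(steps):
                loss = F.cross_entropy(model((clips + delta).clamp(0, 1)), labels)
                grad, = torch.autograd.grad(loss, delta)
                delta = (delta + (epsilon / 4) * grad.sign()).clamp(-epsilon, epsilon)
                delta = delta.detach().requires_grad_(True)

            # Outer step: update only the pattern so that defended adversarial
            # clips are classified correctly again.
            defended = (clips + delta.detach() + pattern).clamp(0, 1)
            loss = F.cross_entropy(model(defended), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return pattern.detach()

Because gradients flow only into the pattern, the optimization is far cheaper than adversarially retraining the backbone, which matches the computational-cost argument in the abstract; at inference time the learned pattern would simply be added to each incoming clip.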