Context-Guided Black-Box Attack for Visual Tracking

Cited by: 0
Authors
Huang, Xingsen [1 ,2 ]
Miao, Deshui [1 ]
Wang, Hongpeng [1 ,2 ]
Wang, Yaowei [2 ]
Li, Xin [2 ]
Affiliations
[1] Harbin Inst Technol Shenzhen, Sch Comp Sci & Technol, Shenzhen 518055, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Target tracking; Feature extraction; Visualization; Transformers; Interference; Image reconstruction; Robustness; Visual tracking; adversarial attack;
DOI
10.1109/TMM.2024.3382473
CLC number
TP [Automation and computer technology]
Discipline code
0812
Abstract
With the recent advancement of deep neural networks, visual tracking has achieved substantial progress in accuracy. However, the robustness and security of tracking methods built on current deep models have not been thoroughly explored, a critical consideration for real-world applications. In this study, we propose a context-guided black-box attack method to investigate the robustness of recent advanced deep trackers against spatial and temporal interference. For spatial interference, the proposed algorithm generates adversarial target samples by mixing information from the target object and similar background regions around it in the embedded feature space of an encoder-decoder model, which evaluates the ability of trackers to handle background distractors. For temporal interference, we use the target state in the previous frame to generate the adversarial sample, which easily fools trackers that rely too heavily on tracking priors, such as the assumption that a target's appearance and position change only slightly between consecutive frames. We assess the proposed attack method on both CNN-based and transformer-based tracking frameworks using four diverse datasets: OTB100, VOT2018, GOT-10k, and LaSOT. The experimental results demonstrate that our approach substantially degrades the performance of all these deep trackers across the datasets, even in black-box attack mode. This reveals the weak robustness of recent deep tracking methods against background distractors and prior dependencies.
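The spatial-interference idea described in the abstract (blending target and nearby-distractor information in an encoder-decoder's embedding space) can be sketched as follows. This is a minimal illustration only: the linear `encode`/`decode` maps, the dimensions, and the mixing weight `alpha` are hypothetical stand-ins, not the paper's actual learned model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the paper's learned encoder/decoder:
# a random linear projection and its pseudo-inverse.
D_IN, D_EMB = 64, 16
W_enc = rng.standard_normal((D_EMB, D_IN)) / np.sqrt(D_IN)
W_dec = np.linalg.pinv(W_enc)  # crude "decoder" back to input space

def encode(x):
    return W_enc @ x  # input patch -> embedding

def decode(z):
    return W_dec @ z  # embedding -> reconstructed patch

def context_guided_mix(target_patch, distractor_patch, alpha=0.3):
    """Blend target and nearby-background (distractor) features in the
    embedding space, then decode to an adversarial target sample.
    alpha controls how much distractor context is injected."""
    z_t = encode(target_patch)
    z_d = encode(distractor_patch)
    z_adv = (1.0 - alpha) * z_t + alpha * z_d
    return decode(z_adv)

# Toy "patches": flattened feature vectors standing in for image regions.
target = rng.standard_normal(D_IN)
distractor = rng.standard_normal(D_IN)
adv = context_guided_mix(target, distractor, alpha=0.3)
print(adv.shape)  # prints (64,)
```

With `alpha=0.3` the adversarial sample's embedding stays closer to the target than to the distractor, drifting just enough toward the background region to probe how well a tracker separates the target from distractors.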
Pages: 8824-8835 (12 pages)
Related Papers
50 records in total
  • [1] Universal Low-Frequency Noise Black-Box Attack on Visual Object Tracking
    Hou, Hanting
    Bao, Huan
    Wei, Kaimin
    Wu, Yongdong
    SYMMETRY-BASEL, 2025, 17 (03):
  • [2] Simulator Attack+ for Black-Box Adversarial Attack
    Ji, Yimu
    Ding, Jianyu
    Chen, Zhiyu
    Wu, Fei
    Zhang, Chi
    Sun, Yiming
    Sun, Jing
    Liu, Shangdong
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 636 - 640
  • [3] A Practical Black-Box Attack on Source Code Authorship Identification Classifiers
    Liu, Qianjun
    Ji, Shouling
    Liu, Changchang
    Wu, Chunming
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2021, 16 : 3620 - 3633
  • [4] Restricted Black-Box Adversarial Attack Against DeepFake Face Swapping
    Dong, Junhao
    Wang, Yuan
    Lai, Jianhuang
    Xie, Xiaohua
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 2596 - 2608
  • [6] An Effective Way to Boost Black-Box Adversarial Attack
    Feng, Xinjie
    Yao, Hongxun
    Che, Wenbin
    Zhang, Shengping
    MULTIMEDIA MODELING (MMM 2020), PT I, 2020, 11961 : 393 - 404
  • [7] Toward Visual Distortion in Black-Box Attacks
    Li, Nannan
    Chen, Zhenzhong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 6156 - 6167
  • [8] Adaptive hyperparameter optimization for black-box adversarial attack
    Guan, Zhenyu
    Zhang, Lixin
    Huang, Bohan
    Zhao, Bihe
    Bian, Song
    INTERNATIONAL JOURNAL OF INFORMATION SECURITY, 2023, 22 (06) : 1765 - 1779
  • [9] Transferable Black Box Attack on Visual Object Tracking Based on Important Features
    Yao R.
    Zhu X.-B.
    Zhou Y.
    Wang P.
    Zhang Y.-N.
    Zhao J.-Q.
    Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2023, 51 (04): 826 - 834
  • [10] Boosting Black-Box Attack to Deep Neural Networks With Conditional Diffusion Models
    Liu, Renyang
    Zhou, Wei
    Zhang, Tianwei
    Chen, Kangjie
    Zhao, Jun
    Lam, Kwok-Yan
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 5207 - 5219