Context-Guided Black-Box Attack for Visual Tracking

Cited by: 0
Authors
Huang, Xingsen [1 ,2 ]
Miao, Deshui [1 ]
Wang, Hongpeng [1 ,2 ]
Wang, Yaowei [2 ]
Li, Xin [2 ]
Affiliations
[1] Harbin Inst Technol Shenzhen, Sch Comp Sci & Technol, Shenzhen 518055, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518055, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Target tracking; Feature extraction; Visualization; Transformers; Interference; Image reconstruction; Robustness; Visual tracking; Adversarial attack
DOI
10.1109/TMM.2024.3382473
CLC Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812
Abstract
With the recent advancement of deep neural networks, visual tracking has achieved substantial progress in tracking accuracy. However, the robustness and security of tracking methods built on current deep models have not been thoroughly explored, a critical consideration for real-world applications. In this study, we propose a context-guided black-box attack method to investigate the robustness of recent advanced deep trackers against spatial and temporal interference. For spatial interference, the proposed algorithm generates adversarial target samples by mixing information from the target object and the similar background regions around it in the embedded feature space of an encoder-decoder model, which evaluates the ability of trackers to handle background distractors. For temporal interference, we use the target state in the previous frame to generate the adversarial sample, which easily fools trackers that rely too heavily on tracking priors, such as the assumption that a target's appearance and position change only slightly between consecutive frames. We assess the proposed attack method under both CNN-based and transformer-based tracking frameworks on four diverse datasets: OTB100, VOT2018, GOT-10k, and LaSOT. The experimental results demonstrate that our approach substantially degrades the performance of all these deep trackers across numerous datasets, even in the black-box attack mode. This reveals the weak robustness of recent deep tracking methods against background distractors and prior dependencies.
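The two interference modes described in the abstract can be sketched as follows. This is a minimal illustrative sketch only: the function names, the convex-mixing form, and the (x, y, w, h) box parameterisation are assumptions for exposition, not the authors' actual implementation, and the encoder-decoder reconstruction step is omitted.

```python
import numpy as np

def mix_features(target_feat: np.ndarray,
                 background_feat: np.ndarray,
                 alpha: float = 0.75) -> np.ndarray:
    """Spatial interference: blend the target embedding with a nearby
    background-distractor embedding in the encoder's latent space.
    A decoder would map the mixed embedding back to image space to
    obtain the adversarial target sample."""
    return alpha * target_feat + (1.0 - alpha) * background_feat

def perturb_prior_state(prev_box, shift=(5.0, -5.0), scale=1.1):
    """Temporal interference: nudge the previous-frame box (x, y, w, h)
    so that a tracker relying on a smooth-motion prior searches the
    wrong region in the next frame."""
    x, y, w, h = prev_box
    dx, dy = shift
    return (x + dx, y + dy, w * scale, h * scale)
```

In the black-box setting, such perturbed samples are crafted without access to the attacked tracker's gradients, which is consistent with the evaluation protocol the abstract describes.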
Pages
8824-8835
Page Count
12