Understanding the vulnerability of skeleton-based Human Activity Recognition via black-box attack

Cited by: 1
Authors
Diao, Yunfeng [1]
Wang, He [2]
Shao, Tianjia [3]
Yang, Yongliang [4]
Zhou, Kun [3]
Hogg, David [5]
Wang, Meng [1]
Affiliations
[1] Hefei Univ Technol, Sch Comp Sci & Informat Engn, Hefei, Peoples R China
[2] UCL, Dept Comp Sci, London, England
[3] Zhejiang Univ, State Key Lab CAD&CG, Hangzhou, Peoples R China
[4] Univ Bath, Dept Comp Sci, Bath, England
[5] Univ Leeds, Sch Comp, Leeds, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Black-box attack; Skeletal action recognition; Adversarial robustness; On-manifold adversarial samples;
DOI
10.1016/j.patcog.2024.110564
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars, where safety and lives are at stake. Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks. However, the proposed attacks require full knowledge of the attacked classifier, which is overly restrictive. In this paper, we show that such threats do exist even when the attacker has access only to the input/output of the model. To this end, we propose BASAR, the first black-box adversarial attack approach for skeleton-based HAR. BASAR explores the interplay between the classification boundary and the natural motion manifold. To the best of our knowledge, this is the first time the data manifold has been introduced into adversarial attacks on time series. Via BASAR, we find that on-manifold adversarial samples are extremely deceitful and rather common in skeletal motions, in contrast to the common belief that adversarial samples exist only off-manifold. Through exhaustive evaluation, we show that BASAR delivers successful attacks across classifiers, datasets, and attack modes. Through its attacks, BASAR helps identify potential causes of model vulnerability and provides insights into possible improvements. Finally, to mitigate the newly identified threat, we propose a new adversarial training approach, mixed manifold-based adversarial training (MMAT), which leverages the sophisticated distributions of on/off-manifold adversarial samples. MMAT successfully helps defend against adversarial attacks without compromising classification accuracy.
Pages: 10
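
The record above only summarizes the method, so what follows is an illustrative sketch rather than the authors' implementation: a minimal decision-based (hard-label) black-box attack loop in the spirit the abstract describes, written in plain NumPy. The query-only classifier interface, the parents skeleton array, the bone-length projection used as a crude stand-in for the natural motion manifold, and all hyper-parameters are assumptions for illustration, not details taken from the paper.

import numpy as np

def is_adversarial(classifier, motion, true_label):
    # Hard-label, query-only access: one forward pass, predicted label only.
    return classifier(motion) != true_label

def binary_search_to_boundary(classifier, clean, adv, true_label, tol=1e-4):
    # Pull the adversarial motion toward the clean one along the straight
    # line between them, stopping just on the adversarial side of the boundary.
    lo, hi = 0.0, 1.0  # interpolation weight toward `clean`
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        candidate = (1.0 - mid) * adv + mid * clean
        if is_adversarial(classifier, candidate, true_label):
            lo = mid  # still adversarial: safe to move closer to clean
        else:
            hi = mid  # crossed the boundary: back off
    return (1.0 - lo) * adv + lo * clean

def project_bone_lengths(motion, reference, parents):
    # Crude on-manifold step: rescale every bone in every frame to its length
    # in the clean reference motion while keeping the perturbed direction.
    # `parents[j]` is the parent joint of j (-1 for the root), ordered so that
    # parents always precede children. motion: (frames, joints, 3).
    out = motion.copy()
    for t in range(motion.shape[0]):
        for j, p in enumerate(parents):
            if p < 0:
                continue
            bone = motion[t, j] - motion[t, p]
            ref_len = np.linalg.norm(reference[t, j] - reference[t, p])
            out[t, j] = out[t, p] + bone * ref_len / (np.linalg.norm(bone) + 1e-8)
    return out

def decision_based_attack(classifier, clean, true_label, steps=200, sigma=0.01):
    # Find any misclassified starting point, then alternate boundary tracking
    # with small random steps that must stay on the adversarial side.
    for _ in range(100):
        adv = clean + np.random.randn(*clean.shape)
        if is_adversarial(classifier, adv, true_label):
            break
    else:
        raise RuntimeError("no initial adversarial point found")
    for _ in range(steps):
        adv = binary_search_to_boundary(classifier, clean, adv, true_label)
        candidate = adv + sigma * np.random.randn(*adv.shape)
        if is_adversarial(classifier, candidate, true_label):
            adv = candidate
    return binary_search_to_boundary(classifier, clean, adv, true_label)

In the same hedged spirit, a sketch of how an MMAT-style training batch might be assembled, mixing clean motions with on- and off-manifold adversarial versions; the mixing probabilities p_adv and p_on are hypothetical, not values from the paper:

def mixed_manifold_batch(clean_batch, labels, classifier, parents,
                         p_adv=0.5, p_on=0.5, rng=np.random):
    # Replace each clean motion, with probability p_adv, by an adversarial
    # version; a second coin flip decides whether that version is projected
    # back toward the manifold (on-manifold) or left unconstrained.
    batch = []
    for x, y in zip(clean_batch, labels):
        if rng.rand() < p_adv:
            adv = decision_based_attack(classifier, x, y, steps=20)
            if rng.rand() < p_on:
                adv = project_bone_lengths(adv, x, parents)
            x = adv
        batch.append(x)
    return np.stack(batch), np.asarray(labels)

In practice the adversarial samples would be generated offline or with a far cheaper attack; the query-heavy loop above is used here only to keep the sketch self-contained.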