Segmentation Fault: A Cheap Defense Against Adversarial Machine Learning

Cited by: 1
Authors
Al Bared, Doha [1 ]
Nassar, Mohamed [2 ]
Affiliations
[1] Amer Univ Beirut AUB, Dept Comp Sci, Beirut, Lebanon
[2] Univ New Haven, Dept Comp Sci, West Haven, CT USA
Keywords
Machine Learning; Adversarial ML; Neural Networks; Computer Vision
DOI
10.1109/MENACOMM50742.2021.9678308
CLC classification number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Recently published attacks against deep neural networks (DNNs) have stressed the importance of methodologies and tools to assess the security risks of using this technology in critical systems. Efficient techniques for detecting adversarial machine learning help establish trust and boost the adoption of deep learning in sensitive and security-critical systems. In this paper, we propose a new technique for defending deep neural network classifiers, convolutional ones in particular. Our defense is cheap in the sense that it requires less computation power, at a small cost in detection accuracy. Our work builds on a recently published technique called ML-LOO: we replace its costly pixel-by-pixel leave-one-out approach with a coarse-grained, segment-level leave-one-out. We evaluate and compare the efficiency of different segmentation algorithms for this task. Our results show that a large gain in efficiency is possible, at the price of only a marginal decrease in detection accuracy.
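The coarse-grained leave-one-out idea described in the abstract can be sketched as follows. This is an illustrative, hypothetical reconstruction, not the authors' code: the grid segmentation stands in for the real segmentation algorithms the paper compares (e.g. superpixel methods), the two-class toy model stands in for a trained CNN, and the interquartile-range statistic follows the ML-LOO style of measuring the dispersion of per-region attribution scores, which tends to be larger for adversarial inputs.

```python
import numpy as np

def grid_segments(h, w, cell=8):
    """Toy stand-in for a real segmentation algorithm (e.g. SLIC):
    partition the image into square cells and give each a label."""
    rows = np.arange(h) // cell
    cols = np.arange(w) // cell
    return rows[:, None] * (w // cell + 1) + cols[None, :]

def segment_loo_scores(image, segments, predict, target):
    """Coarse-grained leave-one-out: occlude one segment at a time and
    record the drop in the model's probability for the target class.
    One forward pass per segment, instead of one per pixel as in ML-LOO."""
    base = predict(image)[target]
    scores = []
    for seg_id in np.unique(segments):
        occluded = image.copy()
        occluded[segments == seg_id] = 0.0  # zero out the whole segment
        scores.append(base - predict(occluded)[target])
    return np.array(scores)

def dispersion(scores):
    """ML-LOO-style detection statistic: interquartile range of the
    attribution scores; a threshold on this flags adversarial inputs."""
    q75, q25 = np.percentile(scores, [75, 25])
    return q75 - q25

# Demo with a toy two-class "model": softmax over the mean intensities
# of the left and right halves of the image (purely for illustration).
def toy_predict(img):
    logits = np.array([img[:, :16].mean(), img[:, 16:].mean()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
img = rng.random((32, 32))
segs = grid_segments(32, 32, cell=8)          # 16 segments for a 32x32 image
scores = segment_loo_scores(img, segs, toy_predict, target=0)
print(len(np.unique(segs)), dispersion(scores))
```

With 8x8 cells on a 32x32 input, the detector needs 16 occluded forward passes rather than 1024 pixel-level ones, which is the source of the efficiency gain the abstract reports; coarser segments trade detection accuracy for speed.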
Pages: 37-42
Page count: 6
Related papers
50 records in total
  • [31] eXplainable and Reliable Against Adversarial Machine Learning in Data Analytics
    Vaccari, Ivan
    Carlevaro, Alberto
    Narteni, Sara
    Cambiaso, Enrico
    Mongelli, Maurizio
    IEEE ACCESS, 2022, 10 : 83949 - 83970
  • [32] Robust Machine Learning against Adversarial Samples at Test Time
    Lin, Jing
    Njilla, Laurent L.
    Xiong, Kaiqi
    ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020,
  • [33] Secure machine learning against adversarial samples at test time
    Lin, Jing
    Njilla, Laurent L.
    Xiong, Kaiqi
    EURASIP JOURNAL ON INFORMATION SECURITY, 2022, 2022 (01)
  • [34] Defense against Universal Adversarial Perturbations
    Akhtar, Naveed
    Liu, Jian
    Mian, Ajmal
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 3389 - 3398
  • [35] Deblurring as a Defense against Adversarial Attacks
    Duckworth, William, III
    Liao, Weixian
    Yu, Wei
    2023 IEEE 12TH INTERNATIONAL CONFERENCE ON CLOUD NETWORKING, CLOUDNET, 2023, : 61 - 67
  • [36] Adversarial Machine Learning
    Tygar, J. D.
    IEEE INTERNET COMPUTING, 2011, 15 (05) : 4 - 6
  • [37] ASCL: Adversarial supervised contrastive learning for defense against word substitution attacks
    Shi, Jiahui
    Li, Linjing
    Zeng, Daniel
    NEUROCOMPUTING, 2022, 510 : 59 - 68
  • [38] Instance-based defense against adversarial attacks in Deep Reinforcement Learning
    Garcia, Javier
    Sagredo, Ismael
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2022, 107
  • [39] Defense against Adversarial Patch Attacks for Aerial Image Semantic Segmentation by Robust Feature Extraction
    Wang, Zhen
    Wang, Buhong
    Zhang, Chuanlei
    Liu, Yaohui
    REMOTE SENSING, 2023, 15 (06)
  • [40] Defense Strategies Against Adversarial Jamming Attacks via Deep Reinforcement Learning
    Wang, Feng
    Zhong, Chen
    Gursoy, M. Cenk
    Velipasalar, Senem
    2020 54TH ANNUAL CONFERENCE ON INFORMATION SCIENCES AND SYSTEMS (CISS), 2020, : 336 - 341