Segmentation Fault: A Cheap Defense Against Adversarial Machine Learning

Cited by: 1
Authors
Al Bared, Doha [1 ]
Nassar, Mohamed [2 ]
Affiliations
[1] Amer Univ Beirut AUB, Dept Comp Sci, Beirut, Lebanon
[2] Univ New Haven, Dept Comp Sci, West Haven, CT USA
Keywords
Machine Learning; Adversarial ML; Neural Networks; Computer Vision
DOI
10.1109/MENACOMM50742.2021.9678308
CLC number
TP [Automation Technology; Computer Technology]
Discipline code
0812
Abstract
Recently published attacks against deep neural networks (DNNs) have stressed the importance of methodologies and tools for assessing the security risks of deploying this technology in critical systems. Efficient techniques for detecting adversarial machine learning help establish trust and boost the adoption of deep learning in sensitive and security-critical systems. In this paper, we propose a new technique for defending deep neural network classifiers, convolutional ones in particular. Our defense is cheap in the sense that it requires far less computation, at a small cost in detection accuracy. The work builds on a recently published technique called ML-LOO: we replace its costly pixel-by-pixel leave-one-out approach with a coarse-grained, segment-level leave-one-out. We evaluate and compare the efficiency of different segmentation algorithms for this task. Our results show that a large gain in efficiency is possible, at the price of only a marginal decrease in detection accuracy.
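The core idea, as the abstract describes it, is to occlude whole segments rather than individual pixels, so the number of classifier forward passes drops from one per pixel to one per segment. The sketch below illustrates segment-level leave-one-out in a minimal, hedged form: `grid_segments` is a simple grid partition standing in for a real segmentation algorithm (such as a superpixel method), and `toy_predict` is an illustrative stand-in classifier; neither is from the paper.

```python
import numpy as np

def grid_segments(h, w, n=4):
    # Partition an h x w image into an n x n grid of segments.
    # A stand-in for a real segmentation algorithm (e.g. superpixels).
    seg = np.zeros((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            seg[i, j] = (i * n // h) * n + (j * n // w)
    return seg

def segment_loo_scores(image, predict, segments, fill=0.0):
    # Coarse-grained leave-one-out: occlude one segment at a time and
    # record the drop in the classifier's top-class confidence.
    # One forward pass per segment instead of one per pixel.
    base = predict(image)
    cls = int(np.argmax(base))
    scores = {}
    for s in np.unique(segments):
        masked = image.copy()
        masked[segments == s] = fill
        scores[int(s)] = float(base[cls] - predict(masked)[cls])
    return scores

def toy_predict(img):
    # Hypothetical two-class "classifier" keyed on mean brightness,
    # used only to make the sketch runnable end to end.
    logits = np.array([img.mean(), 1.0 - img.mean()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Usage: an 8x8 image with a bright top-left quadrant, split into a
# 2x2 grid -> 4 forward passes instead of 64 pixel-wise ones.
img = np.zeros((8, 8))
img[:4, :4] = 1.0
segs = grid_segments(8, 8, n=2)
scores = segment_loo_scores(img, toy_predict, segs)
```

In ML-LOO-style detection, the statistics of these per-segment score distributions (rather than per-pixel ones) would then feed the adversarial-vs-benign detector; the saving comes entirely from the smaller number of occlusions.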
Pages: 37-42
Page count: 6