Segmentation Fault: A Cheap Defense Against Adversarial Machine Learning

Cited by: 1
Authors:
Al Bared, Doha [1 ]
Nassar, Mohamed [2 ]
Affiliations:
[1] Amer Univ Beirut AUB, Dept Comp Sci, Beirut, Lebanon
[2] Univ New Haven, Dept Comp Sci, West Haven, CT USA
Keywords:
Machine Learning; Adversarial ML; Neural Networks; Computer Vision
DOI: 10.1109/MENACOMM50742.2021.9678308
CLC number: TP [Automation and computer technology]
Discipline code: 0812
Abstract
Recently published attacks against deep neural networks (DNNs) have stressed the importance of methodologies and tools for assessing the security risks of deploying this technology in critical systems. Efficient techniques for detecting adversarial examples help establish trust and boost the adoption of deep learning in sensitive and security-critical systems. In this paper, we propose a new technique for defending deep neural network classifiers, convolutional ones in particular. Our defense is cheap in the sense that it requires far less computation, at a small cost in detection accuracy. The work builds on a recently published technique called ML-LOO. We replace the costly pixel-by-pixel leave-one-out approach of ML-LOO with a coarse-grained, segment-level leave-one-out. We evaluate and compare the efficiency of different segmentation algorithms for this task. Our results show that a large gain in efficiency is possible, at the price of only a marginal decrease in detection accuracy.
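The core idea in the abstract — masking one whole segment at a time instead of one pixel at a time, then using the spread of the resulting attributions as an ML-LOO-style detection statistic — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `grid_segments` partition stands in for the real segmentation algorithms the paper evaluates, and `toy_predict` is a hypothetical two-class scorer used only to make the example self-contained.

```python
import numpy as np

def grid_segments(h, w, block=8):
    """Toy 'segmentation': partition an h x w image into block x block
    squares. (The paper compares real segmentation algorithms; a fixed
    grid stands in here so the sketch stays dependency-free.)"""
    rows = np.arange(h) // block          # block-row index per pixel row
    cols = np.arange(w) // block          # block-col index per pixel col
    n_cols = (w + block - 1) // block
    return rows[:, None] * n_cols + cols[None, :]

def segment_loo_features(predict, x, segments, baseline=0.0):
    """Coarse-grained leave-one-out: mask out one whole segment at a time
    and record the drop in the top-class score. This yields one feature
    per segment instead of ML-LOO's one feature per pixel, so the number
    of forward passes falls from h*w to the number of segments."""
    p0 = predict(x)
    top = int(np.argmax(p0))
    feats = []
    for s in np.unique(segments):
        x_masked = x.copy()
        x_masked[segments == s] = baseline
        feats.append(p0[top] - predict(x_masked)[top])
    return np.array(feats)

def dispersion_score(feats):
    """ML-LOO-style statistic: adversarial inputs tend to show a wider
    spread of leave-one-out attributions. Here we use the interquartile
    range; a threshold on this score flags suspected adversarial inputs."""
    q75, q25 = np.percentile(feats, [75, 25])
    return q75 - q25

def toy_predict(img):
    """Hypothetical 2-class scorer: class-0 logit grows with mean intensity."""
    z = np.array([img.mean(), 1.0 - img.mean()])
    e = np.exp(z - z.max())
    return e / e.sum()
```

For a 16x16 input with `block=8`, the grid yields 4 segments, so detection costs 4 extra forward passes rather than 256 — the efficiency gain the abstract refers to, with granularity (and hence accuracy) controlled by how fine the segmentation is.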
Pages: 37-42
Page count: 6