I Know What You Trained Last Summer: A Survey on Stealing Machine Learning Models and Defences

Cited: 37
Authors
Oliynyk, Daryna [1 ]
Mayer, Rudolf [1 ,2 ]
Rauber, Andreas [2 ]
Affiliations
[1] SBA Research, Vienna, Austria
[2] Vienna University of Technology, A-1040 Vienna, Austria
Keywords
Machine learning; model stealing; model extraction; attacks
DOI
10.1145/3595292
Chinese Library Classification
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Machine-Learning-as-a-Service (MLaaS) has become a widespread paradigm, making even the most complex Machine Learning models available for clients via, e.g., a pay-per-query principle. This allows users to avoid time-consuming processes of data collection, hyperparameter tuning, and model training. However, by giving their customers access to the (predictions of their) models, MLaaS providers endanger their intellectual property such as sensitive training data, optimised hyperparameters, or learned model parameters. In some cases, adversaries can create a copy of the model with (almost) identical behaviour using the prediction labels only. While many variants of this attack have been described, only scattered defence strategies that address isolated threats have been proposed. To arrive at a comprehensive understanding of why these attacks are successful and how they could be holistically defended against, a thorough systematisation of the field of model stealing is necessary. We address this by categorising and comparing model stealing attacks, assessing their performance, and exploring corresponding defence techniques in different settings. We propose a taxonomy for attack and defence approaches and provide guidelines on how to select the right attack or defence strategy based on the goal and available resources. Finally, we analyse which defences are rendered less effective by current attack strategies.
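To make the label-only extraction attack concrete, the following is a minimal sketch in Python with scikit-learn, assuming a victim classifier that, like an MLaaS endpoint, returns only hard prediction labels: the attacker labels synthetic queries through it, trains a surrogate, and then measures agreement (fidelity) with the victim. The choice of models, the query distribution, and the query_victim helper are illustrative assumptions, not details from the survey.

```python
# Minimal sketch of a label-only model extraction attack (illustrative,
# not the survey's method). The "victim" is a local classifier standing
# in for a pay-per-query MLaaS endpoint that returns hard labels only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Victim: a model the attacker can only query, never inspect.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def query_victim(inputs):
    """Simulates the API: returns prediction labels only, no confidences."""
    return victim.predict(inputs)

# Attacker: draw synthetic queries, label them via the API, train a copy.
queries = rng.normal(size=(1000, X.shape[1]))
stolen_labels = query_victim(queries)
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# Fidelity: how often the copy agrees with the victim on unseen inputs.
fidelity = np.mean(surrogate.predict(X_test) == query_victim(X_test))
print(f"surrogate/victim agreement: {fidelity:.2%}")
```

Even this naive setup shows why the attack surface is hard to close: the attacker needs no training data, no gradients, and no confidence scores, only enough labelled queries to fit a substitute of comparable behaviour.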
Pages: 41
Related Papers
3 records
  • [1] I Still Know What You Did Last Summer: Inferring Sensitive User Activities on Messaging Applications Through Traffic Analysis
    Bozorgi, Ardavan; Bahramali, Alireza; Rezaei, Fateme; Ghafari, Amirhossein; Houmansadr, Amir; Soltani, Ramin; Goeckel, Dennis; Towsley, Don
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (05): 4135-4153
  • [2] Too Big to FAIL: What You Need to Know Before Attacking a Machine Learning System
    Dumitras, Tudor; Kaya, Yigitcan; Marginean, Radu; Suciu, Octavian
    SECURITY PROTOCOLS XXVI, 2018, 11286: 150-162
  • [3] Video Commentary & Machine Learning: Tell Me What You See, I Tell You Who You Are
    Baloul, Mohamed S.; Yeh, Vicky J.-H.; Mukhtar, Fareeda; Ramachandran, Dhanya; Traynor, Michael D., Jr.; Shaikh, Nizamuddin; Rivera, Mariela; Farley, David R.
    JOURNAL OF SURGICAL EDUCATION, 2022, 79 (06): E263-E272