Machine Unlearning

Cited by: 299
Authors
Bourtoule, Lucas [1,2]
Chandrasekaran, Varun [3]
Choquette-Choo, Christopher A. [1,2]
Jia, Hengrui [1,2]
Travers, Adelin [1,2]
Zhang, Baiwu [1,2]
Lie, David [1]
Papernot, Nicolas [1,2]
Affiliations
[1] Univ Toronto, Toronto, ON, Canada
[2] Vector Inst, Toronto, ON, Canada
[3] Univ Wisconsin, Madison, WI 53706 USA
Source
2021 IEEE SYMPOSIUM ON SECURITY AND PRIVACY, SP | 2021
Funding
Natural Sciences and Engineering Research Council of Canada; US National Science Foundation;
Keywords
PROTECTION;
DOI
10.1109/SP40001.2021.00019
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Once users have shared their data online, it is generally difficult for them to revoke access and ask for the data to be deleted. Machine learning (ML) exacerbates this problem because any model trained with said data may have memorized it, putting users at risk of a successful privacy attack exposing their information. Yet, having models unlearn is notoriously difficult. We introduce SISA training, a framework that expedites the unlearning process by strategically limiting the influence of a data point in the training procedure. While our framework is applicable to any learning algorithm, it is designed to achieve the largest improvements for stateful algorithms like stochastic gradient descent for deep neural networks. SISA training reduces the computational overhead associated with unlearning, even in the worst-case setting where unlearning requests are made uniformly across the training set. In some cases, the service provider may have a prior on the distribution of unlearning requests that will be issued by users. We may take this prior into account to partition and order data accordingly, and further decrease overhead from unlearning. Our evaluation spans several datasets from different domains, with corresponding motivations for unlearning. Under no distributional assumptions, for simple learning tasks, we observe that SISA training improves time to unlearn points from the Purchase dataset by 4.63x, and 2.45x for the SVHN dataset, over retraining from scratch. SISA training also provides a speed-up of 1.36x in retraining for complex learning tasks such as ImageNet classification; aided by transfer learning, this results in a small degradation in accuracy. Our work contributes to practical data governance in machine unlearning.
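To make the abstract's strategy concrete, below is a minimal, hypothetical sketch of SISA-style (Sharded, Isolated, Sliced, Aggregated) training: data is partitioned into shards so each point influences exactly one constituent model, each shard is trained slice by slice with checkpoints, and an unlearning request retrains only the affected shard, resuming from the checkpoint before the slice that contained the point. The synthetic dataset, scikit-learn's SGDClassifier, and the shard/slice counts (S=4, R=3) are illustrative assumptions, not the paper's implementation.

# Hypothetical sketch of sharded/sliced training with checkpoint-based
# unlearning; toy values and learner are assumptions, not the paper's setup.
import copy
import numpy as np
from sklearn.linear_model import SGDClassifier

S, R = 4, 3  # number of shards, slices per shard (arbitrary toy values)

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
classes = np.unique(y)

# Shard: each point lands in exactly one shard, so it influences only
# that shard's constituent model. Slice each shard for checkpointing.
perm = rng.permutation(len(X))
shards = [list(np.array_split(idx, R)) for idx in np.array_split(perm, S)]

def train_shard(s, start_slice=0, model=None):
    """Train shard s slice by slice, checkpointing after each slice."""
    model = model if model is not None else SGDClassifier(random_state=0)
    ckpts = []
    for r in range(start_slice, R):
        sl = shards[s][r]
        model.partial_fit(X[sl], y[sl], classes=classes)
        ckpts.append(copy.deepcopy(model))
    return ckpts

checkpoints = [train_shard(s) for s in range(S)]

def predict(x):
    """Aggregate the constituent models' labels by majority vote."""
    votes = [c[-1].predict(x.reshape(1, -1))[0] for c in checkpoints]
    return max(set(votes), key=votes.count)

def unlearn(point):
    """Delete one training point, then retrain only the affected shard,
    resuming from the checkpoint preceding the slice that held it."""
    for s in range(S):
        for r in range(R):
            mask = shards[s][r] == point
            if mask.any():
                shards[s][r] = shards[s][r][~mask]
                base = copy.deepcopy(checkpoints[s][r - 1]) if r > 0 else None
                checkpoints[s][r:] = train_shard(s, start_slice=r, model=base)
                return

print("before:", predict(X[0]))
unlearn(0)  # retrains at most one of S shards, from one slice onward
print("after: ", predict(X[0]))

Because only one of S shards retrains, and only from the affected slice onward, expected unlearning cost shrinks with both S and R; the aggregation step (here a simple majority vote over labels) is what recovers accuracy otherwise lost to training constituent models on smaller partitions.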
Pages: 141-159
Number of pages: 19