Mixed-Privacy Forgetting in Deep Networks

Cited by: 56
Authors
Golatkar, Aditya [1 ,2 ]
Achille, Alessandro [1 ]
Ravichandran, Avinash [1 ]
Polito, Marzia [1 ]
Soatto, Stefano [1 ]
Affiliations
[1] Amazon Web Serv, Seattle, WA 98109 USA
[2] Univ Calif Los Angeles, Los Angeles, CA 90024 USA
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021 | 2021
Keywords
DOI
10.1109/CVPR46437.2021.00085
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
We show that the influence of a subset of the training samples can be removed - or "forgotten" - from the weights of a network trained on large-scale image classification tasks, and we provide strong computable bounds on the amount of information remaining after forgetting. Inspired by real-world applications of forgetting techniques, we introduce a novel notion of forgetting in a mixed-privacy setting, where we know that a "core" subset of the training samples does not need to be forgotten. While this variation of the problem is conceptually simple, we show that working in this setting significantly improves the accuracy and guarantees of forgetting methods applied to vision classification tasks. Moreover, our method allows efficient removal of all information contained in non-core data by simply setting a subset of the weights to zero, with minimal loss in performance. We achieve these results by replacing a standard deep network with a suitable linear approximation. With opportune changes to the network architecture and training procedure, we show that such a linear approximation achieves performance comparable to the original network, and that the forgetting problem becomes quadratic and can be solved efficiently even for large models. Unlike previous forgetting methods for deep networks, ours achieves close to state-of-the-art accuracy on large-scale vision tasks. In particular, we show that our method allows forgetting without having to trade off model accuracy.
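The abstract packs the whole mechanism into a few sentences, so a toy example may help make it concrete. Below is a minimal sketch in JAX, not the authors' implementation: it assumes a small MLP as the "core" model, a squared-error user objective, and plain gradient descent as the quadratic solver, and every name in it (mlp, f_lin, user_loss, fit) is hypothetical. It linearizes the network around the core weights with a Jacobian-vector product, trains only the user weights on that linear model, and forgets by re-solving the quadratic problem on the retained data (or by zeroing the user weights entirely).

    import jax
    import jax.numpy as jnp

    def mlp(params, x):
        # Tiny two-layer network standing in for the pretrained "core" model.
        w1, b1, w2, b2 = params
        return jax.nn.relu(x @ w1 + b1) @ w2 + b2

    def f_lin(core, delta, x):
        # First-order Taylor expansion around the core weights:
        #   f_lin(x; delta) = f(x; w_core) + J_w f(x; w_core) . delta
        # jax.jvp returns f(x; w_core) and the Jacobian-vector product.
        f0, jv = jax.jvp(lambda p: mlp(p, x), (core,), (delta,))
        return f0 + jv

    def user_loss(delta, core, x, y):
        # Squared loss of a model that is linear in delta, so the
        # objective is exactly quadratic in the user weights.
        return jnp.mean((f_lin(core, delta, x) - y) ** 2)

    key = jax.random.PRNGKey(0)
    k1, k2, k3, k4, k5, k6 = jax.random.split(key, 6)
    core = (0.3 * jax.random.normal(k1, (8, 16)), jnp.zeros(16),
            0.3 * jax.random.normal(k2, (16, 1)), jnp.zeros(1))

    # Toy user data; (x_f, y_f) is the subset we must later forget.
    x_r, y_r = jax.random.normal(k3, (64, 8)), jax.random.normal(k4, (64, 1))
    x_f, y_f = jax.random.normal(k5, (16, 8)), jax.random.normal(k6, (16, 1))

    grad_fn = jax.jit(jax.grad(user_loss))

    def fit(x, y, steps=500, lr=0.01):
        # Gradient descent on the quadratic objective, started from zero.
        delta = jax.tree_util.tree_map(jnp.zeros_like, core)
        for _ in range(steps):
            g = grad_fn(delta, core, x, y)
            delta = jax.tree_util.tree_map(lambda p, gi: p - lr * gi, delta, g)
        return delta

    # Train the user weights on all user data.
    delta_all = fit(jnp.vstack([x_r, x_f]), jnp.vstack([y_r, y_f]))

    # Forget (x_f, y_f): because the problem is quadratic, re-minimizing on
    # the retained data alone approximately re-solves for user weights that
    # never saw the forgotten subset.
    delta_forget = fit(x_r, y_r)

    # Forget ALL user data: zero the user weights; only the core remains.
    delta_zero = jax.tree_util.tree_map(jnp.zeros_like, core)

    print("loss on retained data:", user_loss(delta_forget, core, x_r, y_r))

The paper itself solves the quadratic problem with far more efficient machinery than naive gradient descent, and its bounds on remaining information follow from this quadratic structure; the sketch only illustrates why linearity makes both the retraining and the zeroing-out forms of forgetting cheap.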
Pages: 792-801
Page count: 10
Related Papers
50 records in total
  • [21] Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks Using an Incompetent Teacher
    Chundawat, Vikram S.
    Tarun, Ayush K.
    Mandal, Murari
    Kankanhalli, Mohan
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 6, 2023 : 7210 - 7217
  • [22] Catastrophic forgetting in connectionist networks
    French, RM
    TRENDS IN COGNITIVE SCIENCES, 1999, 3 (04) : 128 - 135
  • [23] A deep decentralized privacy-preservation framework for online social networks
    Frimpong, Samuel Akwasi
    Han, Mu
    Effah, Emmanuel Kwame
    Adjei, Joseph Kwame
    Hanson, Isaac
    Brown, Percy
    BLOCKCHAIN-RESEARCH AND APPLICATIONS, 2024, 5 (04)
  • [25] Differential Privacy Enabled Deep Neural Networks for Wireless Resource Management
    Rahman, Md Habibur
    Mowla, Md Munjure
    Shanto, Shahriar
    MOBILE NETWORKS & APPLICATIONS, 2022, 27 (05) : 2153 - 2162
  • [26] A Deep Learning Cache Framework for Privacy Security on Heterogeneous IoT Networks
    Li, Jian
    Feng, Meng
    Li, Shuai
    IEEE ACCESS, 2024, 12 : 93261 - 93269
  • [28] Deep neural networks and mixed integer linear optimization
    Fischetti, Matteo
    Jo, Jason
    CONSTRAINTS, 2018, 23 (03) : 296 - 309
  • [29] Mixed Evidence for Gestalt Grouping in Deep Neural Networks
    Biscione, V.
    Bowers, J. S.
    Computational Brain & Behavior, 2023, 6 (3) : 438 - 456
  • [30] Deep (Un)Learning: Using Neural Networks to Model Retention and Forgetting in an Adaptive Learning System
    Matayoshi, Jeffrey
    Uzun, Hasan
    Cosyn, Eric
    ARTIFICIAL INTELLIGENCE IN EDUCATION (AIED 2019), PT I, 2019, 11625 : 258 - 269